I’m confused because you describe an “argument specifically that you are dispatching with your efficiency arguments”, and the first paragraph sounds like an EY argument, but the 2nd more like my argument. (And ‘dispatching’ is ambiguous)
Ugh yes, I have no idea why I originally formatted the second paragraph as a quote (I fully intended it as an articulation of your argument, a rebuttal to the first EY-style paragraph). Just a confusing formatting and structure error on my part. Sorry about that, and thanks for your patience.
So as a summary: you agree that AI could be trained to be somewhat smarter than humans, but you disagree with the model in which an AI suddenly and iteratively extracts something like 6 OOMs better performance on the same hardware it's running on, all at once figures out ways to interact with the physical world from within the hardware it's already training on, and then strikes humanity all at once with undetectable nanotech before the training run is even complete.
The inability of the AI to attain 6 OOMs better performance on its training hardware during its training run by recursively self-improving its own software is mainly based on physical efficiency limits, and this is why you put such heavy emphasis on them. And the idea that neural net-like structures, which are very demanding in terms of compute, energy, space, etc., appear to be the only tractable road to superintelligence means that there is no alternative, much more efficient scheme by which the neural net form of the AI could rewrite itself into a fundamentally more efficient architecture on this scale. Again, you have other arguments to deal with other concerns and to make other predictions about the outcome of training superintelligent AI, but dispatching this specific scenario is where your efficiency arguments are most important. Is that correct?
Yes, but I again expect AGI to use continuous learning, so the training run doesn't really end. But yes, I largely agree with that summary.
NN/DL in its various flavors is simply what efficient approximate Bayesian inference looks like, and there are no viable, non-equivalent, dramatically better alternatives.
Thanks Jacob for talking me through your model. I agree with you that this is a model that EY and others associated with him have put forth. I’ve looked back through Eliezer’s old posts, and he is consistently against the idea that LLMs are the path to superintelligence (not just that they’re not the only path, but he outright denies that superintelligence could come from neural nets).
My update, based on your arguments here, is that any future claim about a mechanism for iterative self-improvement that happens suddenly, on the training hardware, and involves >2 OOMs of improvement needs to first deal with the objections you are raising here in order to meaningfully move the conversation forward.
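To make the "OOMs of headroom" framing above concrete, here is a toy calculation, in the spirit of the efficiency-limits argument, of how many orders of magnitude separate an assumed present-day energy cost per operation from the Landauer bound. The Boltzmann constant and the Landauer formula are standard physics; the `1e-12` J/FLOP figure for current hardware is purely an illustrative assumption, and comparing joules-per-FLOP against joules-per-bit-erasure overstates the headroom, since one floating-point operation involves many bit erasures.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant in J/K (exact SI value)
T = 300.0           # assumed operating temperature in kelvin

# Landauer bound: minimum energy to erase one bit at temperature T
landauer_j_per_bit = K_B * T * math.log(2)  # ~2.87e-21 J

# Illustrative assumption for current hardware energy cost per FLOP
current_j_per_flop = 1e-12  # hypothetical round number, not a measured figure

# Apparent headroom in orders of magnitude (an overestimate: a FLOP
# is many bit operations, so the true software/hardware headroom on
# fixed silicon is considerably smaller than this naive ratio)
headroom_ooms = math.log10(current_j_per_flop / landauer_j_per_bit)
print(f"Landauer bound: {landauer_j_per_bit:.3g} J/bit")
print(f"Naive headroom: {headroom_ooms:.1f} OOMs")
```

Even this deliberately generous toy comparison yields only single-digit OOMs of total headroom, which is the shape of the argument against a 6-OOM software-only self-improvement jump on fixed hardware.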