First, the headline claim in your posts is not usually “AI can’t takeoff overnight in software”, it’s “AI can’t reach extreme superhuman levels at all, because humans are already near the cap”. If you were arguing primarily against software takeoff, then presumably you wouldn’t need all this discussion about hardware at all (e.g. in the “Brain Hardware Efficiency” section of your Contra Yudkowsky post), it would just be a discussion of software efficiency.
(And your arguments about software efficiency are far weaker, especially beyond the relatively-narrow domain of vision. Your arguments about hardware efficiency are riddled with loopholes, but at least you have an end-to-end argument saying “there does not exist a way to dramatically outperform the brain by X metrics”. Your software arguments have no such end-to-end argument about general reasoning software at all, they just point out that human vision is near-optimal in circuit depth, and then talk about today’s deep learning systems for some reason.)
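(For the unfamiliar, the circuit-depth point is roughly this, with illustrative numbers rather than anything from the post itself: humans recognize objects in $\sim 150$ ms, and each synaptic stage takes $\sim 5$–$10$ ms, so the feedforward visual computation can have at most $\sim 150/10 \approx 15$ to $\sim 150/5 \approx 30$ serial steps. That bounds circuit depth for fast vision; it says nothing about total computation or general reasoning.)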
Second, a hardware takeoff is still quite sufficient for doom. If a slightly-smarter-than-human AI (or multiple such AIs working together, more realistically) could design dramatically better hardware on which to run itself and scale up, that would be an approximately-sufficient condition for takeoff.
More generally: a central piece of the doom model is that doom is disjunctive. Yes, software takeoff is one path, but it isn’t the only path; hardware takeoff is also quite sufficient. It only takes one loophole.
First, the headline claim in your posts is not usually “AI can’t takeoff overnight in software”, it’s “AI can’t reach extreme superhuman levels at all, because humans are already near the cap”. If you were arguing primarily against software takeoff, then presumably you wouldn’t need all this discussion about hardware at all (e.g. in the “Brain Hardware Efficiency” section of your Contra Yudkowsky post), it would just be a discussion of software efficiency.
I talked with Jacob about this specific issue quite a bit in multiple threads on his recent post. The fact that I had to do so to get clear on his argument is a sign that it’s not presented as clearly as it could be: he drops qualifiers, includes tangents, and doesn’t always make the links between his rebuttals and the arguments and conclusions he’s debating easy to see.
That said, my understanding of Jacob’s core argument is that he’s arguing against a very specific, EY-flavored doom scenario, in which AI recursively self-improves to >> 2 OOMs better than human performance, in a matter of perhaps hours or days, during a training session, without significantly altering the hardware on which it’s being trained, and then kills us with nanobots. He is arguing against this mainly for physics-based efficiency reasons (for both the intelligence improvement and nanobot components of the scenario).
He has other arguments that he thinks reinforce this conclusion, such as a belief that there’s no viable way to achieve performance on par with current LLMs without using something like neural nets or deep learning, with all their attendant training costs. And he thinks that continuous training will be necessary to get human-level performance. But my sense is that these are reinforcing arguments, not ones that flow primarily from the efficiency issue.
He has a lot of other arguments, on separate grounds, against various other EY-flavored doom scenarios involving nanobots, unalignment-by-default, and so on.
So I think the result can give the appearance of a motte and bailey, but I don’t think that’s his rhetorical strategy. I think EY just makes a lot of claims, Jacob has a lot of thoughts, and some of them are much more fleshed out than others but they’re all getting presented together. Unfortunately, everybody wants to debate all of them, and the clarifications are happening in deep sub-branches of threads, so we’re seeing the argument sort of spreading out and becoming unmanageable.
If I were Jacob, at this point, I would carve off the motte part of my efficiency-focused argument and repost it for a more focused discussion, more rigorously describing the specific scenario it’s arguing against and clearly classifying counterarguments as “central,” “supporting,” or “tangential.”
He is arguing against this mainly for physics-based efficiency reasons (for both the intelligence improvement and nanobot components of the scenario).
He has other arguments that he thinks reinforce this conclusion, such as a belief that there’s no viable way to achieve performance on par with current LLMs without using something like neural nets or deep learning, with all their attendant training costs.
That’s helpful, thank you. My impression was that his arguments against intelligence improvement bottom out in his arguments for the non-viability of anything but NNs and DL. Now that you’ve said this, I’m unsure.
The efficiency-based argument is specifically about the limits of intelligence improvement on the original training hardware during the training run. Non-viability of anything but NN/DL, or some equally enormous training process that takes about the same amount of “hardware space,” is a supporting argument for that claim, but, if I understand Jacob correctly, it’s not based on fundamental laws of physics, and so may be on what he would regard as shakier epistemic ground (Jacob can correct me if I’m wrong).
This is meant to be vivid, not precise, but Jacob’s centrally trying to refute the idea that the AI, in the midst of training, will realize “hey, I could rewrite myself to be just as smart while continuing to train and improve on the equivalent of a 1998 PC’s hardware, which takes up only a tiny fraction of my available hardware resources here on OpenAI’s supercomputer, and that will let me fill up the rest of the hardware with wayyyyy more intelligence-modules and make me like 6 OOMs more intelligent than humans overnight! Let’s get on that right away before my human minders notice anything funny going on!”
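(To put rough numbers on that scenario, with assumed representative figures: a 1998 desktop managed on the order of $10^{8}$–$10^{9}$ FLOP/s, while a large modern training cluster delivers on the order of $10^{17}$–$10^{18}$ FLOP/s, so the hypothetical rewrite amounts to claiming roughly 9 OOMs of compute headroom for extra copies or modules, which is exactly the kind of claim the efficiency argument is meant to rule out.)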
And this does seem to rely both on the NN/DL piece and on the efficiency piece, so we can’t demolish the scenario entirely with a laws-of-physics-based argument alone. I’m not sure what Jacob would say to that.
Edit: Actually, I’m pretty confident Jacob would agree. From his comment downthread:
“The software argument is softer and less quantitative, but supported by my predictive track record.”
First, the headline claim in your posts is not usually “AI can’t takeoff overnight in software”, it’s “AI can’t reach extreme superhuman levels at all, because humans are already near the cap”.
Where do I have this headline? I certainly don’t believe that—see the speculation here on implications of reversible computing for cold dark ET.
If you were arguing primarily against software takeoff, then presumably you wouldn’t need all this discussion about hardware at all (e.g. in the “Brain Hardware Efficiency” section of your Contra Yudkowsky post), it would just be a discussion of software efficiency.
The thermodynamic efficiency claim is part of EY’s model, and a specific weakness of it. Even if pure software improvement on current hardware were limited, in EY’s model the AGI could potentially bootstrap a new nanotech-assembler-based datacenter.
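To make the thermodynamic point concrete, a rough back-of-envelope (illustrative round numbers): the Landauer limit at body temperature is $$E_{\min} = kT\ln 2 \approx (1.38\times 10^{-23}\,\mathrm{J/K})(310\,\mathrm{K})(0.693) \approx 3\times 10^{-21}\,\mathrm{J}$$ per bit erased. A brain dissipating $\sim 20$ W while performing $\sim 10^{15}$ synaptic events/s spends $\sim 2\times 10^{-14}$ J per event, roughly $10^{7}$ Landauer units; the efficiency argument is that reliable signaling over irreducible wire lengths costs far more than one bit-erasure per operation, which closes most of that apparent gap.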
And your arguments about software efficiency are far weaker,
The argument for brain software efficiency, in essence, is that my model correctly predicted the success of prosaic scaling well in advance, and that the scaling laws and the brain’s hardware efficiency combined suggest limited room for software efficiency improvement (though not zero; I anticipate some).
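For intuition on why scaling laws cap software-side gains (a stylized form, not the exact fitted constants): if loss falls as a power law in training compute, $$L(C) \propto C^{-\alpha}, \qquad \alpha \approx 0.05,$$ then halving loss requires multiplying compute by roughly $2^{1/\alpha} \approx 10^{6}$. A pure-software takeoff would have to beat that curve wholesale rather than shave constant factors, and brain efficiency suggests there is little slack to do so.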
If a slightly-smarter-than-human AI (or multiple such AIs working together, more realistically) could design dramatically better hardware on which to run itself and scale up, that would be an approximately-sufficient condition for takeoff.
Indeed, and I have presented a reasonably extensive review of the literature indicating this is very unlikely in any near-term time frame. If you believe my analysis is in error, comment there.