I think you’re confounding two questions:
Does AIHHAI accelerate AI?
If I observe AIHHAI does this update my priors towards Fast/Slow Takeoff?
I think it’s pretty clear that AIHHAI accelerates AI development (without Copilot, I would have to write all those lines myself).
However, I think that observing AIHHAI should actually update your priors towards Slow Takeoff (or at least Moderate Takeoff). One reason is that humans are inherently slower than machines, and as Amdahl's law reminds us, a system composed of a slow part and a fast part cannot go faster than its slow part.
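The Amdahl point can be made concrete with a quick calculation: if a fraction of the AI-development loop stays serial and human-paced, the overall speedup is capped at the reciprocal of that fraction, no matter how fast the AI part gets. The 20% figure below is purely illustrative, not a claim from the discussion.

```python
def amdahl_speedup(serial_fraction, fast_part_speedup):
    """Overall speedup when only the non-serial fraction is accelerated.

    serial_fraction: share of the work that stays at the slow (human) pace.
    fast_part_speedup: how much faster the remaining work becomes.
    """
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / fast_part_speedup)

# If humans remain 20% of the loop, a 10x-faster AI part yields only ~3.6x
# overall, and even an effectively infinite speedup caps out at 1/0.2 = 5x.
print(amdahl_speedup(0.2, 10))    # ~3.57
print(amdahl_speedup(0.2, 1e9))   # ~5.0: the human bottleneck dominates
```

This is why "humans in the loop" pushes towards slower takeoff: as long as the serial human fraction is non-negligible, arbitrarily good AI assistance gives only a bounded multiplier.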
The other reason is that AIHHAI should cause you to lower your belief in a threshold effect. The original argument for Foom went something like "if computers can think like humans, and one thing humans can do is make better computers, then once computers are as smart as humans, computers will make even better computers… ergo foom." In other words, Foom relies on the belief that there is a critical threshold which leads to an intelligence explosion. However, a world where we observe AIHHAI is direct evidence against such a critical threshold, since it is an example of a sub-human intelligence helping a human-level intelligence to advance AI.
The alternative model to Foom is something like this: “AI development is much like other economic growth, the more resources you have, the faster it goes.” AIHHAI is a specific example of such an economic input, where spending more of something helps us go faster.
Maybe, but couldn’t it also mean that we just haven’t reached the threshold yet? Some period of AIHHAI might be a necessary step, or even a catalyst, on the way to that threshold. Observing AIHHAI doesn’t imply that no foom threshold exists.
Well, I agree that if the two worlds I had in mind were 1) foom without real AI progress beforehand and 2) continuous progress, then seeing more continuous progress from increased investments should indeed update me towards 2).
The key parameter here is the substitutability between capital and labor: is human labor the bottleneck, or is capital? Different substitutability assumptions imply different growth trajectories. (For a paper / video on this see the last paragraph here.)
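A standard way to formalize this (my framing, not spelled out in the comment) is a CES production function, where a single parameter controls how substitutable the two inputs are, and hence how hard the scarce input bottlenecks output.

```python
def ces_output(labor, capital, rho, alpha=0.5):
    """CES production: Y = (alpha*L^rho + (1-alpha)*K^rho)^(1/rho).

    rho near 1 means labor and capital are near-perfect substitutes,
    so abundant capital can stand in for scarce labor. Large negative
    rho approaches the Leontief (perfect-complements) case, where the
    scarce input is a hard bottleneck.
    """
    return (alpha * labor**rho + (1 - alpha) * capital**rho) ** (1 / rho)

# Hold labor fixed at 1 and pour in 100x capital:
print(ces_output(1, 100, rho=0.9))   # near-substitutes: output grows a lot
print(ces_output(1, 100, rho=-5))    # near-complements: labor bottlenecks it
```

Under the near-substitutes assumption, "just spend more capital" keeps working; under the near-complements assumption, growth stalls on whichever input (ML talent, in the intuition below) you cannot buy more of.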
The world in which DALL-E 2 happens and people start using GitHub Copilot looks to me like a world where human labour is substitutable by AI labour, which right now mostly means being part of the GitHub Copilot open beta, but in the future might look like capital (paying for the product, or investing in building the technology yourself). My intuition right now is that big companies are more bottlenecked by ML talent than by capital (cf. the “are we in ai overhang” post explaining how much more capital Google could invest in AI).
Yes, I definitely think that there is quite a bit of headroom in how much more capital businesses could be deploying. GPT-3 cost ~$10M, whereas I think businesses could probably spend 2–3 OOM more if they wanted to (and a Manhattan-project-scale effort would be more like 4 OOM bigger, i.e. ~$100B).
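As a sanity check on those orders of magnitude, starting from the ~$10M GPT-3 figure quoted above:

```python
gpt3_cost = 10e6  # ~$10M, the figure quoted above

# Scale the baseline up by 2, 3, and 4 orders of magnitude.
scaled = {oom: gpt3_cost * 10**oom for oom in (2, 3, 4)}
for oom, cost in scaled.items():
    print(f"{oom} OOM above GPT-3: ${cost / 1e9:.0f}B")
# 2 OOM -> $1B, 3 OOM -> $10B, 4 OOM -> $100B (the Manhattan-project scale)
```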