I think foom is a central crux for AI safety folks, and in my personal experience I’ve noticed that the degree to which someone is doomy often correlates strongly with how foomy their views are.
Given this, I thought it would be worth trying to concisely highlight what I think are my central anti-foom beliefs, such that if you were to convince me that I was wrong about them, I would likely become much more foomy, and as a consequence, much more doomy. I’ll start with a definition of foom, and then explain my cruxes.
Definition of foom: AI foom is said to happen if at some point in the future while humans are still mostly in charge, a single agentic AI (or agentic collective of AIs) quickly becomes much more powerful than the rest of civilization combined.
Clarifications:
By “quickly” I mean fast enough that other coalitions and entities in the world, including other AIs, either do not notice it happening until it’s too late, or cannot act to prevent it even if they were motivated to do so.
By “much more powerful than the rest of civilization combined” I mean that the agent could handily beat them in a one-on-one conflict, without taking on a lot of risk.
This definition does not count instances in which a superintelligent AI takes over the world after humans have already been made obsolete by previous waves of automation from non-superintelligent AI. That’s because in that case, the question of how to control an AI foom would be up to our non-superintelligent AI descendants, rather than something we need to solve now.
Core beliefs that make me skeptical of foom:
1. For an individual AI to be smart enough to foom in something like our current world, its intelligence would need to vastly outstrip individual human intelligence at tech R&D. In other words, if an AI is merely moderately smarter than the smartest humans, that is not sufficient for a foom.
Clarification: “Moderately smarter” can be taken to mean “roughly as smart as GPT-4 is compared to GPT-3.” I don’t consider humans to be only moderately smarter than chimpanzees at tech R&D, since chimpanzees have roughly zero ability to do this task.
Supporting argument:
Plausible stories of foom generally assume that the AI is capable enough to develop some technology “superpower” like full-scale molecular nanotechnology all on its own. However, in the real world almost all technologies are developed from precursor technologies and are only enabled by other tools that must be invented first. Also, developing a technology usually involves a lot of trial and error before it works well.
Raw intelligence is helpful for making the trial and error process go faster, but to get a “superpower” that can beat the rest of the world, you need to develop all the prerequisite tools for the “superpower” first. It’s not enough to simply know how to crack nanotech in principle: you need to completely, and independently from the rest of civilization, invent all the required tools for nanotech, and invent all the tools that are required for building those tools, and so on, all the way down the stack.
It would likely require an extremely large amount of tech R&D to invent all the tools down the entire stack for e.g. molecular nanotech, and thus the only way you could do it independently of the rest of civilization is if your intelligence vastly outstripped individual human intelligence. It’s comparable to how hard it would be for a single person to invent modern 2023 microprocessors in 1950, without any of the modern tools we have for building microprocessors.
Key consequence: we aren’t going to get a superintelligence capable of taking over the world as a result of merely scaling up our training budgets 1-3 orders of magnitude above the “human level” with slightly better algorithms, for any concretely identifiable “human level” that will happen any time soon.
2. To get a foom in something like our current world, either algorithmic progress would need to increase suddenly and dramatically, or we would need to increase compute scaling suddenly and dramatically. In other words, foom won’t simply happen as a result of ordinary rates of progress continuing past the human level for a few more years. Note: this is not a belief I expect most foom adherents to strongly disagree with.
Clarification: by “suddenly and dramatically” I mean at a rate much faster than would be expected given the labor inputs to R&D and by extrapolating past trends; or more concretely, an increase of >4 OOMs of effective compute for the largest training run within 1 year in something like our current world. “Effective compute” refers to training compute adjusted for algorithmic efficiency.
Supporting argument:
Our current rates of progress from GPT-2 --> GPT-3 --> GPT-4 have been rapid, but they have been sustained mostly by increasing compute budgets by 2 OOMs during each iteration. This cannot continue for more than 4 years without training budgets growing to a significant fraction of the global economy, which itself would likely require an unprecedented ramp-up in global semiconductor production. Sustaining the trend for more than 6 years appears impossible without the economy itself growing rapidly. (A rough back-of-the-envelope extrapolation is sketched below, after this point’s key consequence.)
Because of (1), an AI would need to vastly exceed human abilities at tech R&D to foom, not merely moderately exceed those abilities. If we take “vastly exceed” to mean more than the jump from GPT-3 to GPT-4, then to get to superintelligence within a few years after human-level, there must be some huge algorithmic speedup that would permit us to use our compute much more efficiently, or a compute overhang with the same effect.
Key consequence: for foom to be plausible, there must be some underlying mechanism, such as recursive self-improvement, that would cause a sudden, dramatic increase in either algorithmic progress or compute scaling in something that looks like our current world. (Note that labor inputs to R&D could increase greatly if AI automates R&D in a general sense, but this looks more like the slow takeoff scenario, see point 4.)
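To make the compute-scaling constraint concrete, here is a rough back-of-the-envelope extrapolation. All of the starting figures (the cost of a frontier training run, the generation cadence, world GDP) are illustrative assumptions rather than measured values; the point is only that growing compute budgets by 2 OOMs per generation collides with the size of the economy within a handful of years.

```python
# Back-of-the-envelope only: every figure below is an illustrative assumption.
ASSUMED_FRONTIER_TRAINING_COST = 1e8   # dollars; rough guess for a 2023 frontier run
WORLD_GDP = 1e14                       # dollars; world GDP is on the order of $100T
OOMS_PER_GENERATION = 2                # compute growth per generation, per the trend above
YEARS_PER_GENERATION = 2               # assumed cadence between generations

cost = ASSUMED_FRONTIER_TRAINING_COST
for year in range(0, 9, YEARS_PER_GENERATION):
    print(f"year +{year}: training run ~${cost:.0e}, ~{cost / WORLD_GDP:.4%} of world GDP")
    cost *= 10 ** OOMS_PER_GENERATION
```

On these assumed numbers the largest run reaches roughly 1% of world GDP around year 4 and roughly 100% around year 6, which is what the 4-to-6-year window above is pointing at.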
3. Before widespread automation has happened, we are unlikely to find a sudden “key insight” that rapidly increases AI performance far above the historical rate, or experience a hardware overhang that has the same effect.
Clarification: “far above the historical rate” can be taken to mean a >4 OOM increase in effective compute within a year, which was the same as what I meant in point (2).
Supporting argument:
Most AI progress has plausibly come from scaling hardware, and from combining several different smaller insights that we’ve accumulated over time from experimentation, rather than sudden key insights.
The big insights that people point to when talking about rapid jumps in the past often (1) come from a time when few people were putting effort into advancing the field of AI, (2) turn out to be exaggerated when their effects are quantified, or (3) had clear precursors in the literature, and only became widely used because of the availability of hardware that supported their use and allowed them to displace an algorithm that previously didn’t scale well. That last point is particularly important, because it points to a reason why we might be biased towards thinking that AI progress is driven primarily by key insights.
Since “scale” is an axis that everyone has an incentive to push hard to the limits, it’s very unclear why we would suddenly leave that option on the table until the end. The idea of a hardware overhang in which one actor suddenly increases the amount of compute they’re using by several orders of magnitude doesn’t seem plausible as of 2023, since companies are already trying to scale up as fast as possible just to sustain the current rate of progress.
Key consequence: it’s unclear why we should assign a high probability to any individual mechanism that could cause foom, since the primary mechanisms appear speculative and out of line with how AI progress has looked historically.
4. Before we have a system that can foom, the deployment of earlier non-foomy systems will have been fast enough to have already transformed the world. This will have the effect of essentially removing humans from the picture before humans ever need to solve the problem of controlling an AI foom.
Supporting argument:
Deployment of AI seems to happen quite fast. ChatGPT was adopted very rapidly, with a large fraction of society trying it out within the first few weeks after it was released. Future AIs will probably be adopted as fast as, or faster than, smartphones were; and deployment times will likely only get faster as the world becomes more networked and interconnected, which has been the trend for many decades now.
Pre-superintelligent AI systems can radically transform the world by widely automating labor, and increasing the rate of economic growth. This will have the effect of displacing humans from positions of power before we build any system that can foom in our current world. It will also increase the bar required for a single system to foom, since the world will be more technologically advanced generally.
Deployment of AI can be slowed by regulations, deliberate caution, and so on, but if that happens, we will likely also significantly slow the creation of AI capable of fooming, especially by slowing the rate at which compute budgets can be scaled. Therefore the overall conclusion remains.
Key consequence: mechanisms like recursive self-improvement can only cause foom if they come earlier than widespread automation from pre-superintelligent systems. If they happen later, humans will already be out of the picture.
Our current rates of progress from GPT-2 --> GPT-3 --> GPT-4 have been rapid, but they have been sustained mostly by increasing compute budgets by 2 OOMs during each iteration.
Do you have a source for the claim that GPT-3 --> GPT-4 was about a 2 OOM increase in compute budgets? Sam Altman seems to say it was ~100 different tricks in the Lex Fridman podcast.
AI foom is said to happen if at some point in the future while humans are still mostly in charge [...]
Humans being in charge doesn’t seem central to foom. Like, physically these are wholly unrelated things.
mechanisms like recursive self-improvement can only cause foom if they come earlier than widespread automation from pre-superintelligent systems
Only on the humans-not-in-charge technicality introduced in this definition of foom. Something else being in charge doesn’t change what physically happens as a result of recursive self-improvement.
essentially removing humans from the picture before humans ever need to solve the problem of controlling an AI foom
This doesn’t make the problem of controlling an AI foom go away. The non-foomy systems in charge of the world would still need to solve it.
This doesn’t make the problem of controlling an AI foom go away. The non-foomy systems in charge of the world would still need to solve it.
You’re right, of course, but I don’t think it should be a priority to solve problems that our AI descendants will face, rather than us. It is better to focus on making sure our non-foomy AI descendants have the tools to solve those problems themselves, and that they are properly aligned with our interests.
As non-foomy systems grow more capable, they become the most likely source of foom, so building them causes foom by proxy. At that point, their alignment wouldn’t matter in the same way as current humanity’s alignment wouldn’t matter.
My point is that no system will foom until humans have already left the picture. Actually I doubt that any system will foom even after humans have left the picture, but predicting the very long run is hard. If no system will foom until humans are already out of the picture, I fail to see why we should make it a priority to try to control a foom now.
I doubt that any system will foom even after humans have left the picture
This seems more like a crux.
Assuming eventual foom, non-foomy things that don’t set up anti-foom security in time only make the foom problem worse, so this abdication of direct responsibility frame doesn’t help. Assuming no foom, there is no need to bother with abdication of direct responsibility. So I don’t see the relevance of the argument you gave in this thread, built around humanity’s direct vs. by-proxy influence over foom.
Assuming eventual foom, non-foomy things that don’t set up anti-foom security in time only make the foom problem worse, so this abdication of direct responsibility frame doesn’t help.
If foom is inevitable, but it won’t happen when humans are still running anything, then what anti-foom security measures can we actually put in place that would help our future descendants handle foom? And does it look any different than ordinary prosaic alignment research?
It looks like building a minimal system that’s non-foomy by design, for the specific purpose of setting up anti-foom security and nothing else. In contrast to starting with more general hopefully-non-foomy hopefully-aligned systems that quickly increase the risk of foom.
Maybe they manage to set up anti-foom security in time. But if we didn’t do it at all, why would they do any better?
It looks like building a minimal system that’s non-foomy by design, for the specific purpose of setting up anti-foom security and nothing else.
Your link for anti-foom security is to the Arbital article on pivotal acts. I think pivotal acts, almost by definition, assume that foom is achievable in the way that I defined it. That’s because if foom is false, there’s no way you can prevent other people from building AGI after you’ve completed any apparent pivotal act. At most you can delay timelines, by, for example, imposing ordinary regulations. But you can’t actually have a global indefinite moratorium, enforced by e.g. nanotech that melts the GPUs of anyone who circumvents the ban, in the way implied by the pivotal act framework.
In other words, if you think we can achieve pivotal acts while humans are still running the show, then it sounds like you just disagree with my original argument.
I agree that pivotal act AI is not achievable in anything like our current world before AGI takeover, though I think it remains plausible that with ~20 more years of no-AGI status quo this can change. Even deep learning might do, with enough decision theory to explain what a system is optimizing, interpretability to ensure it’s optimizing the intended thing and nothing else, synthetic datasets to direct its efforts at purely technical problems, and enough compute to get there directly without a need for design-changing self-improvement.
Pivotal act AI is an answer to the question of what AI-shaped intervention would improve on the default trajectory of losing control to non-foomy general AIs (even if we assume/expect their alignment) with respect to an eventual foom. This doesn’t make the intervention feasible without more things changing significantly, like an ordinary decades-long compute moratorium somehow getting its way.
I guess pivotal AI as non-foom again runs afoul of your definition of foom, but it’s noncentral as an example of the concerning concept. It’s not a general intelligence given the features of the design that tell it not to dwell on the real world and ideas outside its task, maybe remaining unaware of the real world altogether. It’s almost certainly easy to modify its design (and datasets) to turn it into a general intelligence, but as designed it’s not. This reduction does make your argument point to it being infeasible right now. But it’s much easier to see that directly, in how much currently unavailable deconfusion and engineering a pivotal act AI design would require.
I think we have radically different ideas of what “moderately smarter” means, and also whether just “smarter” is the only thing that matters.
I’m moderately confident that “as smart as the smartest humans, and substantially faster” would be quite adequate to start a self-improvement chain resulting in AI that is both faster and smarter.
Even top-human smarts and speed would be enough, if such an AI could be instantiated many times.
I also expect humans to produce AGI that is smarter than us by more than GPT-4 is smarter than GPT-3, quite soon after the first AGI that is “merely” as smart as us. I think the difference between GPT-3 and GPT-4 is amplified in human perception by how close they are to human intelligence. In my expectation, neither is anywhere near what the existing hardware is capable of, let alone what future hardware might support.
The question is not whether superintelligence is possible, or whether recursive self-improvement can get us there. The question is whether widespread automation will have already transformed the world before the first superintelligence. See point 4.
What do you think of foom arguments built on Baumol effects, such as the one presented in the Davidson takeoff model? The argument being that certain tasks will bottleneck AI productivity, and there will be a sudden explosion in hardware / software / goods & services production when those bottlenecks are finally lifted.
Davidson’s median scenario predicts 6 OOMs of software efficiency and 3 OOMs of hardware efficiency within a single year when 100% automation is reached. Note that this is preceded by five years of double-digit GDP growth, so it could be classified with the scenarios you describe in point 4.
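To illustrate the Baumol-style bottleneck mechanism behind this argument, here is a toy model; the numbers and the CES functional form are illustrative assumptions, not taken from the Davidson model. When tasks are strong complements, aggregate output stays pinned near the productivity of the least-automated task, and then jumps once the last bottleneck is automated.

```python
import numpy as np

def ces_output(task_productivities, rho=-2.0):
    """CES aggregate of per-task productivities. With rho < 0 the tasks are
    complements, so the least-productive task drags down the whole aggregate
    (a Baumol-style bottleneck)."""
    x = np.asarray(task_productivities, dtype=float)
    return np.mean(x ** rho) ** (1.0 / rho)

# Illustrative numbers only: nine tasks automated to 100x baseline productivity,
# one bottleneck task still stuck at 1x (human speed).
mostly_automated = [100.0] * 9 + [1.0]
fully_automated = [100.0] * 10

print(ces_output(mostly_automated))  # ~3.1: output stays near the bottleneck
print(ces_output(fully_automated))   # 100.0: output jumps once the last task is automated
```

In this toy setup, automating the ninth task barely changes aggregate output, while automating the tenth multiplies it by roughly 30x, which is the kind of sudden jump the Baumol argument predicts when the last bottlenecks are lifted.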