Yes, [Mishka’s description of relatively-slow-foom] matches my point of view as well. When I say that I believe recursive self-improvement can and probably will happen in the next few years, this is what I’m pointing at. I expect the first few generations to each take a few months and to be a product of humans and AI systems working together, and I expect the generational improvements to each be less than 2x. I expect there is perhaps 1–3 OOMs of improvement available in software alone before progress gets blocked by the need for slow, expensive hardware changes. So the scenario I’m concerned about looks more like a 2 OOM (+/- 1) improvement over 6–12 months. This is a very different scenario from the 4+ OOM improvement within the first few days, as described in some foom-doom stories.
I agree; a relatively slow “foom” is likely. Moreover, the human team(s) doing it will know that this is exactly what they are doing: a “slowish” foom (at 2 OOM (+/- 1) per 6–12 months, which is still far faster than our current rate of progress).
Whether this process can unexpectedly run away from them and instead explode really fast at some point would depend on whether completely unexpected, radical algorithmic discoveries are made along the way. That’s one thing the whole ecosystem of humans and AIs in such an organization should watch for: they would need genuine consensus among the involved humans and involved AIs to collectively ponder any such discovery before allowing acceleration beyond a “slowish” foom to a much faster one. But it’s not certain that the discoveries enabling a really fast foom will be made; it’s just a possibility.
Yep, agreed. A stronger-than-expected jump is unlikely but possible, and should be guarded against.
As for the 2 OOM speed… I agree, it’s substantially faster than what we’ve been experiencing so far. Think of GPT-4 getting 100x stronger/smarter over the course of a year. That’s plenty scary enough, I think.