Right. And I am saying it is impossible, except for the classes of scenarios I mentioned, because transformative AI is an attractor state.

There are many possible histories, and many possible algorithms that humans, or current AI recursively self-improving, could try.

But the optimization arrow always points toward more powerful AI, and the process is recursive. Given sufficient compute, it's always the outcome.

It's kind of like asking, "The explosives on a fission bomb have detonated and the nuclear core is built to design spec. What is the probability it doesn't detonate?"

Essentially zero. It's impossible. I will acknowledge there is actually a possibility that the physics work out so there's no fission gain and the reaction stops, but it is probably so small it wouldn't happen in the lifespan of the observable universe.