I definitely agree that his confidence in the idea that AI is significant is unjustifiable, but 0 is a probability; it's just the extreme where improbability becomes impossibility.
And that's coming from me, someone who does believe that AI being significant has a pretty high probability.
Right. And I am saying it is impossible, except for the classes of scenarios I mentioned, because transformative AI is an attractor state.
There are many possible histories, and many possible algorithms that humans could try, or that current AI, recursively self-improving, could try.
But the optimization arrow always points in the direction of more powerful AI, and the process is recursive. Given sufficient compute, it is always the outcome.
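To gesture at what "attractor state" means here, a minimal sketch in Python (a toy of my own construction with made-up numbers, not anyone's actual model): as long as each generation can improve the next by some positive factor, capability compounds, so almost any starting point flows toward the same outcome.

```python
# Toy illustration only: recursive improvement as an attractor.
# Any nonzero starting capability, under any positive improvement
# rate, iterates toward ever more powerful AI.
def step(capability: float, rate: float = 0.1) -> float:
    # Improvement is proportional to current capability: the recursion.
    return capability * (1.0 + rate)

c = 1.0
for generation in range(50):
    c = step(c)
print(f"capability after 50 generations: {c:.1f}")  # ~117.4, compounding without bound
```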
It's kinda like saying, "The explosives on a fission bomb have detonated and the nuclear core is to design spec. What is the probability it doesn't detonate?"
Essentially 0. It's impossible. I will acknowledge there is actually a possibility that the physics works out such that the core fails to achieve any fission gain and stops, but that chance is so small it probably won't happen once in the lifespan of the observable universe.
Can you explain why it's "unjustifiable"? What is a plausible future history, or even a merely possible one, free of apocalypse, in which humans plus existing AI systems fail to develop transformative systems by 2100?
I don't have a plausible story either, and I think very high (90%+) confidence in significant impact is reasonable.
But the issue I have is that probabilities of literally 100%, or only a little lower, are unjustifiable, because we must always reserve some probability mass for "our model is totally wrong."
I do think very high confidence is justifiable, though.
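As a minimal sketch of that point (Python, with illustrative numbers I'm making up, not anyone's actual estimates): by the law of total probability, the "model is totally wrong" term caps how confident you can be.

```python
# Illustrative numbers only; none of these are estimates from the discussion.
p_model_correct = 0.95      # credence that our model of AI progress is right at all
p_event_if_correct = 0.99   # P(transformative AI by 2100 | model correct)
p_event_if_wrong = 0.50     # if the model is wrong, fall back toward ignorance

# Law of total probability: the model-error term bounds overall confidence.
p_event = (p_model_correct * p_event_if_correct
           + (1 - p_model_correct) * p_event_if_wrong)
print(f"P(transformative AI by 2100) ~= {p_event:.3f}")  # 0.966, not 1.0
```

Even setting p_event_if_correct to 1.0, the result under these assumed numbers tops out at 0.975; the residual model-error mass is exactly why "literally 100%" can't be justified.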
I accept that having some remaining probability mass for “unknown unknowns” is reasonable. And you can certainly talk about ideas that didn’t work out even though they had advantages and existed 60 years ago. Jetpacks, that sort of thing.
But if you do more than a cursory analysis, you will see that the gain from a jetpack is saving a bit of time, at the cost of risking your life, absurd fuel consumption, high expense, and deafening noise for your neighbors. The gain isn't worth it.
The potential gain from better AI is that it unlocks most of the resources of the solar system (via automated machinery that can manufacture more automated machinery) and makes world conquest feasible. It's literally a "get the technology or lose" situation: all it takes is a belief that another power is close to having AI able to operate self-replicating machinery, and you either invest in the same tech or lose your entire country. Sort of how, right now, Google believes it either releases a counter to BingGPT or loses its company.
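As a sketch of that race dynamic (Python, with hypothetical payoffs of my own choosing): once you put any meaningful probability on the rival investing, investing dominates.

```python
# Hypothetical payoffs for illustration only (first key: our move, second: rival's move).
payoffs = {
    ("invest",  "invest"):  0,    # costly race, but parity is kept
    ("invest",  "abstain"): 10,   # decisive advantage
    ("abstain", "invest"):  -100, # "lose your entire country"
    ("abstain", "abstain"): 1,    # quiet status quo
}

p_rival_invests = 0.3  # even a modest belief that the rival is close
for ours in ("invest", "abstain"):
    ev = (p_rival_invests * payoffs[(ours, "invest")]
          + (1 - p_rival_invests) * payoffs[(ours, "abstain")])
    print(f"{ours}: expected payoff {ev:+.1f}")
# invest: +7.0, abstain: -29.3 -- "invest" wins under these assumed payoffs
```

Under these made-up numbers, "invest" strictly dominates regardless of the belief, which is the "get the technology or lose" structure described above.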
So yeah, I don't see a justification for even 10 percent doubt.