Estimation of timing of AI risk
I want to try once again to assess the expected time until Strong AI. I will estimate a prior probability for AI, and then try to update it based on recent evidence.
First, I will argue for the following prior claim about AI: “If AI is possible, it will most likely be built in the 21st century, or it will be shown that the task has some very tough hidden obstacles”. Arguments for this prior:
Science power argument. We know that humanity was able to solve many very complex tasks in the past, and it typically took around 100 years: heavier-than-air flight, nuclear technology, space exploration. A hundred years is enough for several generations of scientists to concentrate on a complex task and extract everything from it that can be extracted without some extraordinary insight from outside our knowledge. We have already been working on AI for 65 years, by the way.
Moore’s argument. Moore’s law will run out of steam in the 21st century, but this will not stop the growth of ever more powerful computers for a couple of decades afterwards.
This growth will come from cheaper components, from large numbers of interconnected computers, from cumulative production of components, and from large investments of money. It means that even if Moore’s law stops (that is, there is no further progress in microelectronic chip technology), the power of the most powerful computers in the world will keep growing for 10-20 years afterwards, at a slower and slower pace, and may grow 100-1000 times from the moment Moore’s law ends.
But such computers will be very large, power-hungry and expensive: they will cost hundreds of billions of dollars and consume gigawatts of energy. The biggest computer planned now is the 200-petaflop “Summit”, and even if Moore’s law ends with it, this means 20-exaflop computers will eventually be built.
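The arithmetic behind this extrapolation can be checked with a short sketch; the 200-petaflop baseline is from the text, while the 100-1000x residual-growth range is the assumption stated above:

```python
# Post-Moore scaling sketch: non-chip factors (cheaper components,
# interconnection, cumulative production, investment) are assumed to
# multiply peak computing power a further 100-1000x after chip
# progress stops.
BASELINE_FLOPS = 200e15            # ~200 petaflops, biggest machine planned
POST_MOORE_RANGE = (100, 1000)     # assumed residual growth factors

low, high = (BASELINE_FLOPS * f for f in POST_MOORE_RANGE)
print(f"eventual peak: {low:.0e} to {high:.0e} flops")
# eventual peak: 2e+19 to 2e+20 flops
```

The low end of this range is the 20 exaflops mentioned above, and the high end is the 10^20 flops figure used in the conclusion below.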
There are also several almost untapped options: quantum computers, superconductors, FPGAs, new ways of parallelization, graphene, memristors, optics, and the use of genetically modified biological neurons for computation.
All this means that:
A) 10^20 flops computers will eventually be built (which is comparable with some estimates of the human brain's capacity).
B) They will be built in the 21st century.
C) The 21st century will see the biggest advance in computing power compared with any other century, and almost everything that can be built will be built in the 21st century, not after.
So the computers on which AI could run will be built in the 21st century.
Uploading argument. Even the uploading of a worm is lagging, but uploading provides an upper limit on AI timing, and there is no reason to believe that scanning the human brain will take more than 100 years.
Conclusion from the prior: a flat probability distribution.
If we knew for sure that AI would be built in the 21st century, we could give it a flat probability distribution: an equal chance of appearing in any year, around 1 per cent per year. (By the way, this yields a cumulative probability that grows over time, but we will not concentrate on that now.) We can use this as the prior for our future updates. Now we will consider arguments for updating this prior probability.
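As a toy illustration of this flat prior (the start year 2017 is an assumption for the sketch, which is why each remaining year carries slightly more than 1 per cent):

```python
# Flat prior sketch: AI is equally likely to arrive in any remaining
# year of the 21st century.
START, END = 2017, 2100          # assumed window
p_per_year = 1 / (END - START)   # ~1.2% per year under this toy prior

def p_by(year: int) -> float:
    """Cumulative probability that AI has arrived by `year` under the flat prior."""
    return (year - START) * p_per_year

print(round(p_by(2050), 2))   # 0.4 -- about 40% by mid-century
```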
Updates of the prior probability.
Now we can use this prior probability of AI to estimate the timing of AI risk. Above we discussed AI in general; now we add the word “risk”.
Arguments for raising the probability of AI risk in the near future:
We don’t need an AI that is a) self-improving, b) superhuman, c) universal, or d) capable of world domination for an extinction catastrophe: none of these conditions is necessary. Extinction is a simpler task than friendliness. Even a program which helps to design biological viruses, and which is local, non-self-improving, non-agentic and specialized, could do enormous harm by helping to build hundreds of designed pathogen-viruses in the hands of existential terrorists. Extinction-grade AI may be simple, and it could also arrive earlier than full friendly AI. While UFAI may be the ultimate risk, we may not survive until it appears because of simpler forms of AI, almost on the level of computer viruses. In general, earlier risks overshadow later risks.
We should take the lower estimates of the timing of AI arrival, based on the precautionary principle. Basically, this means that we should treat a 10 per cent probability of its arrival as if it were 100 per cent.
We can use the events of the last several years to update our estimate of AI timing. In recent years we have seen enormous progress in AI based on neural nets. The doubling time of AI efficiency on various benchmarks is now around 1 year, and AI has won at many games (Go, poker and so on). Belief in the possibility of AI rose in recent years, resulting in hype and a large growth in investment, as well as many new startups. Specialized hardware for neural nets has been built. If such growth continues for 10-20 years, it would mean a 1,000x to 1,000,000x growth in AI capabilities, which must include reaching human-level AI.
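The growth range quoted here is just the compounding of a roughly 1-year doubling time:

```python
# A ~1-year doubling time in AI efficiency, sustained for 10-20 years,
# compounds to the 1,000x - 1,000,000x range quoted above.
for years in (10, 20):
    print(f"{years} years -> {2 ** years:,}x")
# 10 years -> 1,024x
# 20 years -> 1,048,576x
```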
AI is increasingly used to build new AIs: AI writes programs and helps to map the connectome of the human brain.
All this means that we should expect human-level AI in 10-20 years, and superintelligence soon afterwards.
It also means that the probability of AI is distributed exponentially from now until its creation.
The biggest argument against this is also historical: we have seen many AI hype cycles before, and they failed to produce meaningful results. AI is always “10 years from now”, and AI researchers tend to overestimate progress. Humans tend to be overconfident about AI research.
We are also still far from understanding how the human brain works, and even the simplest questions about it may be puzzling. Another way to assess AI timing is the idea that AI is an unpredictable black-swan event, depending on only one key idea appearing (it seems that Yudkowsky thinks so). If someone gets this idea, AI is here.
In this case we should multiply the number of independent AI researchers by the number of trials, that is, the number of new ideas they get. I suggest treating the latter rate as constant. Then we should estimate the number of active and independent AI researchers, which seems to be growing, fuelled by new funding and hype.
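This “one key idea” model can be sketched as a Poisson process: each researcher generates idea-trials at a constant rate, and AI arrives when some trial hits the key idea. All numbers below are illustrative assumptions, not estimates from the text; the point is only that a growing researcher pool raises the arrival probability:

```python
import math

# Black-swan model of AI arrival: AI appears as soon as one researcher
# hits the single key idea. Parameters are illustrative assumptions.
P_KEY_IDEA = 1e-6       # chance that any single new idea is the key one
IDEAS_PER_YEAR = 10     # new ideas per researcher per year (assumed constant)

def p_ai_within(researchers: int, years: int) -> float:
    """P(at least one breakthrough) among independent idea-trials."""
    trials = researchers * IDEAS_PER_YEAR * years
    return 1 - math.exp(-P_KEY_IDEA * trials)

# Growing the researcher pool sharply raises the chance of arrival:
print(round(p_ai_within(10_000, 10), 3))    # 0.632
print(round(p_ai_within(100_000, 10), 3))   # 1.0
```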
So my conclusion is that if we are going to be afraid of AI, we should estimate its arrival in 2025-2035 and have our preventive ideas ready and deployed by that time. If we hope to use AI to prevent other x-risks or for life extension, we should not expect it until the second half of the 21st century. We should use earlier estimates for bad AI than for good AI.
We know that humanity was able to solve many very complex tasks in the past, and it took typically around 100 years. That is flight of heavier-than-air objects,

That seems to be false: Leonardo da Vinci had drafts of flying machines, and it took a lot longer than 100 years to get actual flight.
That is why I used the wording “typically around”, to show that I meant the average time of large dedicated efforts. Leonardo’s work was not continued by other scientists of the 16th century, so it was not part of a large dedicated effort. It seems that other creators of flying machines tried to invent them from scratch rather than building on Leonardo’s results.
Even in the 19th century, collaboration between aviation enthusiasts was very limited. They probably learned from the failed attempts of others ("oops, flying wings do not work, let's try rotating wings"), but if they had collaborated more effectively, they could have arrived at a working design sooner.
Then take the project of eternal life: a lot of cooperating alchemists worked on it for thousands of years.
I think it is possible to make your argument even stronger: it took hundreds of thousands of years to go from the Stone Age to the Bronze Age.
But it is clear that the “total intelligence” of humanity differs at each stage of its development, and when I spoke about a hundred years, I meant the “total intelligence” of humanity at the level of the 20th century.
Anyway, in the case of AI this is an analogy, and it may not hold. The AI problem could be extremely complex, or even unsolvable, but we cannot bet on that if we want to be well prepared.
Could this average time even follow an exponential trend across the whole of human evolution?