If we assume Moore’s law of doubling every 18 months, and that the AI training-to-runtime ratio is similar to humans’, then the total compute you can get from always having run a program on a machine of price X is about equal to 2 years of compute on a current machine of price X. Another way of phrasing this: if you want as much compute as possible done by some date, and you have a fixed budget, you should buy your computer about 2 years before that date. (If you bought it 50 years before, it would be an obsolete pile of junk; if you bought it 5 minutes before, it would only have 5 minutes to compute.)
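To spell out the arithmetic (assuming a clean 18-month doubling, and normalising a current machine’s speed to 1): running the always-latest machine up to a deadline at $t = 0$ gives total compute

$$\int_{-\infty}^{0} 2^{t/1.5}\,dt = \frac{1.5}{\ln 2} \approx 2.16 \text{ years}$$

of current-machine compute; and a single machine bought $T$ years before the deadline delivers $T \cdot 2^{-T/1.5}$, which is maximised at the same $T = 1.5/\ln 2 \approx 2.16$ years.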
Therefore, in a hardware-limited situation, your AI will have been training for about 2 years. So if your AI takes 20 subjective years to train, it is running at 10x human speed. If the AI development process involved trying 100 variations and then picking the one that works best, then your AI can run at 1000x human speed.
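Spelled out: the training run delivered 20 subjective years in 2 wall-clock years, and that throughput is the speed available to a single copy afterwards:

$$\frac{20 \text{ subjective years}}{2 \text{ years}} = 10\times, \qquad \frac{100 \times 20 \text{ subjective years}}{2 \text{ years}} = 1000\times.$$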
I think the scenario you describe is somewhat plausible, but not the most likely option, because I don’t think we will be hardware-limited. Current supercomputers seem to have around enough compute to simulate every synapse in a human brain with floating-point arithmetic, in real time (based on 10^14 synapses at 100 Hz: around 10^17 FLOPS). I doubt that using accurate serial floating-point operations to simulate noisy analogue neurons, as arranged by evolution, is anywhere near optimal. I also think that we don’t know enough about the software; we don’t currently have anything like an algorithm just waiting for hardware. Still, if some unexpectedly fast algorithmic progress happened in the next few years, or if algorithmic progress later moved in a particular direction, we could get a foom.
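As a sanity check on that figure, a back-of-envelope sketch in Python (the ~10 FLOPs per synaptic event is my own assumption, and the supercomputer figure is roughly Summit-class peak):

```python
# Back-of-envelope: FLOPS needed to simulate every synapse in real time.
synapses = 1e14        # rough human synapse count
rate_hz = 100          # assumed update rate per synapse
flops_per_event = 10   # assumed cost of one synaptic update (my guess)

brain_flops = synapses * rate_hz * flops_per_event
print(f"brain estimate: {brain_flops:.0e} FLOP/s")  # ~1e17

supercomputer_flops = 2e17  # roughly Summit-class peak FLOPS
print(f"supercomputer / brain: {supercomputer_flops / brain_flops:.1f}x")
```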
I really like this response! We are thinking about some of the same math.
Some minor quibbles; and again, I think “years”, not “weeks”, is an appropriate time-frame for “first Human AI → AI surpasses all humans”.
Therefore, in a hardware-limited situation, your AI will have been training for about 2 years. So if your AI takes 20 subjective years to train, it is running at 10x human speed. If the AI development process involved trying 100 variations and then picking the one that works best, then your AI can run at 1000x human speed.
A three-year-old child does not take 20 subjective years to train. Even a 20-year-old adult human does not take 20 subjective years to train. We spend an awful lot of time sleeping, watching TV, etc. I doubt literally every second of that is mandatory for reaching the intelligence of an average adult human being.
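Rough arithmetic, assuming ~8 hours of sleep a day:

$$20 \text{ years} \times \frac{16}{24} \approx 13.3 \text{ waking years},$$

and only some fraction of those waking years is doing anything that looks like training.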
Current supercomputers seem to have around enough compute to simulate every synapse in a human brain with floating-point arithmetic, in real time (based on 10^14 synapses at 100 Hz: around 10^17 FLOPS). I doubt that using accurate serial floating-point operations to simulate noisy analogue neurons, as arranged by evolution, is anywhere near optimal.
I think just the opposite. A synapse is not a FLOP. My estimate is closer to 10^19 FLOPS. Moreover, most of the top slots in the TOP500 list are vanity projects by governments, or are used for stuff like simulating nuclear explosions.
Although, to be fair, once this curve collides with Moore’s law, that 2nd objection will no longer be true.
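For what it’s worth, the 10^17 vs 10^19 gap above comes down almost entirely to the assumed FLOPs per synaptic event; a quick sensitivity sketch (the three per-event costs are illustrative guesses, not measured values):

```python
# Sensitivity of the brain-FLOPS estimate to the per-synaptic-event cost.
synapses = 1e14
rate_hz = 100

for flops_per_event in (1, 10, 1000):  # illustrative guesses
    total = synapses * rate_hz * flops_per_event
    print(f"{flops_per_event:>4} FLOP/event -> {total:.0e} FLOP/s")
# Prints 1e+16, 1e+17, 1e+19: a factor-of-100 difference in this one
# assumption covers the disagreement between the two estimates above.
```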