I think the most important implication, assuming the claim below is right, is that it would be better to regulate the use of AI and superintelligence than to assume we can realistically slow its development down significantly or stop it.
This implies that AI governance should focus less on slowdowns and more on how to make civilization thrive even as AI capabilities advance.
In particular, I think this will cause a stir among AI governance people, because one of its conclusions goes against a central belief of that community: that slowing down AI is the best course of action.
AI is like the bomb: the "impossible vs. inevitable" phenomenon.
As in the case of the bomb, I think building AI systems, including AGI, will become significantly easier with time, and the plot above will also hold for AI. Hardware, engineering, and algorithmic insights will make AIs more capable and cheaper to build and deploy. In particular, I believe we will discover more efficient ways to train AI systems than the current "brute force" approach of using O(N²) operations to train an N-sized model on O(N) pieces of data; since each data point, on average, contributes only O(1) bits of information to the model, it should not require Ω(N) time to process it.
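To make the scaling argument concrete, here is a back-of-the-envelope sketch (the numbers are illustrative orders of magnitude, not real training runs): with N parameters and O(N) training tokens, a dense forward/backward pass costs O(N) operations per token, so total compute grows as O(N²), while the information absorbed grows only as O(N) bits — leaving a factor-of-N gap for a smarter training method to close.

```python
# Back-of-the-envelope: cost of "brute force" training vs. the
# information-theoretic lower bound sketched in the text.
# All quantities are illustrative, not measurements of real runs.

def brute_force_ops(n_params: int) -> int:
    """Dense training: O(N) data points, each costing O(N) ops,
    since one forward/backward pass touches every parameter."""
    n_data = n_params             # O(N) training examples
    ops_per_example = n_params    # O(N) work per example
    return n_data * ops_per_example  # O(N^2) total

def information_bound_ops(n_params: int) -> int:
    """If each example contributes only O(1) bits, an ideal learner
    needs only O(N) total work to absorb the O(N) bits a model holds."""
    return n_params  # O(N)

for n in (10**6, 10**9):
    gap = brute_force_ops(n) // information_bound_ops(n)
    print(f"N={n:.0e}: brute force costs {gap:.0e}x the info bound")
```

The gap between the two curves is exactly a factor of N, which is why even modest algorithmic progress on training efficiency could dominate hardware gains in the long run.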
Being the first mover will require significant expenses but might also confer significant advantages. This does mean that probably any type of “pauses” or “delays” are unlikely to make a long-term difference but may create more of an “overhang” effect, where multiple parties (companies/countries) can achieve similar capabilities at about the same time. For example, if the Manhattan Project didn’t exist, the Atomic bomb would have been delayed by several years, but when it was built, it would have likely been done by multiple countries. Whether such a “multipolar” scenario is good or bad is hard to predict in advance. Ultimately what can be built will be built, and I believe regulations and policies can have a minimal impact on the capabilities of AI systems in the long run. But like in the nuclear case, the decisions of first-movers, regulations, and research investments we make today can profoundly impact humanity’s future trajectory. It was not pre-ordained that we would live in a world with more than 3,000 thermonuclear missiles ready to launch at a second’s notice. It was also not pre-ordained that 75 years after Hiroshima and Nagasaki, only nine countries would have nuclear weapons, and no such weapon has been used in war since. Technology might be inevitable, but the way we use it isn’t.
I indeed believe that regulation should focus on deployment rather than on training.