Progress in AI has largely been a function of increasing compute, human software research efforts, and serial time/steps. Throwing more compute at researchers has improved performance both directly and indirectly (e.g. by enabling more experiments, refining evaluation functions in chess, training neural networks, or making algorithms that work best with large compute more attractive).
Historically compute has grown by many orders of magnitude, while human labor applied to AI and supporting software by only a few. And on plausible decompositions of progress (allowing for adjustment of software to current hardware and vice versa), hardware growth accounts for more of the progress over time than human labor input growth.
So if you’re going to use an AI production function for tech forecasting based on inputs (which does relatively OK by the standards of tech forecasting), it’s best to use all of compute, labor, and time, but it makes sense for compute to have pride of place and receive more modeling effort and attention, since it’s the biggest source of change (particularly when including software gains downstream of hardware technology and expenditures).
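As a toy illustration of that inputs-based framing (my own sketch with made-up weights, not a functional form from this discussion or from Ajeya’s report), a Cobb-Douglas-style combination of compute and labor growth shows why many orders of magnitude of compute growth can dominate a few orders of magnitude of labor growth:

```python
# Toy inputs-based "AI production function" (an illustrative assumption,
# not a functional form anyone in this discussion is committed to).
# Progress is measured in orders of magnitude (OOMs), so the Cobb-Douglas
# form compute^alpha * labor^beta becomes a weighted sum of OOMs of growth.

def ai_progress_ooms(compute_growth_ooms: float, labor_growth_ooms: float,
                     alpha: float = 0.7, beta: float = 0.3) -> float:
    """OOMs of 'effective progress' from OOMs of input growth (toy weights)."""
    return alpha * compute_growth_ooms + beta * labor_growth_ooms

# Historical-flavored toy numbers: ~10 OOMs of compute growth vs ~2 OOMs of labor growth.
print(ai_progress_ooms(10, 2))   # 7.6 total: 7.0 from compute, 0.6 from labor
```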
Thinking about hardware has a lot of helpful implications for constraining timelines:
Evolutionary anchors, combined with paleontological and other information (if you’re worried about Rare Earth miracles), mostly cut off extremely high input estimates for AGI development, like Robin Hanson’s, and we can say from known human advantages relative to evolution that credence should be suppressed some distance short of that (more so with more software progress)
You should have lower a priori credence in smaller-than-insect-brain compute budgets yielding AGI than in more middle-of-the-range compute budgets
It lets you see that you should concentrate probability mass in the next decade or so, because the rapid scaleup of compute investment (with a supporting argument from the increased growth of AI R&D effort) covers a substantial share of the orders of magnitude between where we are and levels that we should expect to be overkill
It gets you likely AGI this century, and in the closer part of that range, even with a pretty flat prior over the orders of magnitude of inputs that will go into success (see the sketch after this list)
It suggests lower annual probability later on if Moore’s Law and friends are dead, with stagnant inputs to AI
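To make the orders-of-magnitude reasoning in the last few points concrete, here is a minimal sketch under toy assumptions (the specific numbers are mine, not Ajeya’s parameters): with a roughly flat, log-uniform prior over the remaining orders of magnitude of training compute needed, the probability a period gets is roughly the share of those orders of magnitude it covers.

```python
# Minimal sketch of a flat (log-uniform) prior over remaining orders of
# magnitude (OOMs) of training compute needed for AGI. All numbers are
# illustrative assumptions, not Ajeya's actual parameters.

remaining_ooms = 12.0            # toy: OOMs between today's largest runs and "overkill" levels
ooms_covered_next_decade = 5.0   # toy: rapid investment scaleup + hardware progress

p_next_decade = ooms_covered_next_decade / remaining_ooms
print(f"P(AGI in next decade) ~ {p_next_decade:.0%}")   # ~42% under these toy numbers

# If scaleup then stalls (Moore's Law and investment growth both flat),
# later decades cover far fewer OOMs, so annual probability drops.
ooms_covered_stagnant_decade = 0.5
print(f"P(AGI in a stagnant decade) ~ {ooms_covered_stagnant_decade / remaining_ooms:.0%}")  # ~4%
```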
These implications are all useful things highlighted by Ajeya’s model, and by earlier work like Moravec’s. In particular, I think Moravec’s forecasting methods are looking pretty good, given the difficulty of the problem. He and Kurzweil (like the computing industry generally) were surprised by the death of Dennard scaling and the slowdown in the price-performance growth of computing, and we’re definitely years behind his forecasts in AI capability, but we are seeing a very compute-intensive AI boom in the right region of compute space. Moravec also anticipated that it would take a lot more compute than one lifetime run to get to AGI. He suggested human-level AGI would arrive in the vicinity of human-like compute quantities being cheap and available for R&D. This old discussion is flawed, but it makes me feel the dialogue is straw-manning Moravec to some extent.
Ajeya’s model puts most of the modeling work on hardware, but it is intentionally expressive enough to let you represent a lot of different views about software research progress; you just have to contribute more of that yourself when adjusting the weights on the different scenarios, or the effective software contribution year by year. You can even represent a breakdown of the expectation that software and hardware significantly trade off over time, and very specific accounts of the AI software landscape and development paths. Regardless, modeling the input to AGI whose change matters most is useful, and I think this dialogue misleads with respect to that by equivocating between hardware not being the only contributing factor and hardware not being an extremely important (or even dominant) driver of progress.
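As a hedged sketch of what that looks like in practice (the structure and numbers below are illustrative, not Ajeya’s actual spreadsheet), you can fold views about software progress into a hardware-centric forecast as a year-by-year effective-compute contribution plus weights over scenarios:

```python
# Hedged sketch of folding software-progress views into a hardware-centric
# forecast: effective compute = hardware compute + software gains, with
# user-supplied weights over scenarios. Illustrative only; this is not
# Ajeya's actual model structure or parameter values.

hardware_ooms_by_year = {2025: 0.0, 2030: 2.0, 2035: 3.5}   # toy cumulative OOMs of compute

scenarios = {
    # name: (prior weight, annual software progress in OOMs of effective compute)
    "slow software":   (0.3, 0.1),
    "steady software": (0.5, 0.3),
    "fast software":   (0.2, 0.6),
}

def effective_ooms(year: int, annual_software_ooms: float, base_year: int = 2025) -> float:
    """Cumulative OOMs of effective compute: hardware growth plus software gains."""
    return hardware_ooms_by_year[year] + annual_software_ooms * (year - base_year)

for year in (2030, 2035):
    expected = sum(w * effective_ooms(year, s) for w, s in scenarios.values())
    print(year, round(expected, 2))   # scenario-weighted effective-compute OOMs
```

The point is just that a hardware-centric skeleton doesn’t stop you from expressing strong views about software; it makes you state them explicitly as weights or year-by-year contributions.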
If the balance of opinion of scientists and policymakers (or those who had briefly heard arguments) was that AI catastrophic risk is high, and that this should be a huge social priority, then you could do a lot of things. For example, you could get budgets of tens of billions of dollars for interpretability research, the way governments already provide tens of billions of dollars of subsidies to strengthen their chip industries. Top AI people would be applying to do safety research in huge numbers. People like Bill Gates and Elon Musk who nominally take AI risk seriously would be doing stuff about it, and Musk could have gotten more traction when he tried to make his case to government.
My perception, based on many areas of experience, is that policymakers and your AI expert survey respondents on the whole think that these risks are too speculative and not compelling enough to outweigh the gains from advancing AI rapidly (your survey respondents state that the gains are much more likely than the harms). In particular, there is much more enthusiasm for the positive gains from AI than your payoff matrix suggests (particularly among AI researchers), and more mutual fear (e.g. the CCP does not want to be overthrown and subjected to trials for crimes against humanity, as has happened to some other regimes, and the rest of the world does not want to live under an oppressive CCP dictatorship indefinitely).
But you’re proposing that people worried about AI disaster should leapfrog past smaller asks, like putting a substantial portion of the effort going into accelerating AI into risk mitigation, which we haven’t been able to achieve because of low buy-in on the case for risk, to far more costly and demanding asks (on policymakers’ views, which already prioritize subsidizing AI capabilities and geopolitical competition). But if you can’t get the smaller, more cost-effective asks because you don’t have buy-in on your risk model, you’re going to achieve even less by focusing on more extravagant demands with much lower cost-effectiveness, demands that require massive shifts to make a difference (adding $1B to annual AI safety spending is a big multiplier on the current baseline, while removing $1B from semiconductor spending is a minuscule proportional decrease; see the rough numbers below).
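Rough numbers to illustrate that proportionality point (the baseline figures below are order-of-magnitude assumptions for illustration, not actual budget data):

```python
# Order-of-magnitude illustration of the cost-effectiveness asymmetry.
# Baseline figures are rough assumptions for illustration, not real budget data.

safety_baseline = 0.1e9   # assume ~$100M/yr of AI safety research funding
semis_baseline = 50e9     # assume ~$50B/yr of chip subsidies and related support

delta = 1e9               # the same $1B applied either way
print(f"+$1B to safety:  {(safety_baseline + delta) / safety_baseline:.0f}x the baseline")   # 11x
print(f"-$1B from chips: {delta / semis_baseline:.0%} proportional cut")                     # 2%
```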
When your view is the minority view, you have to invest in scientific testing to evaluate your view and make the truth more credible, and in better communication. You can’t get around failure to convince the world of a problem by just making more extravagant and politically costly demands about how to solve it. It’s like climate activists in 1950 responding to difficulties passing funds for renewable energy R&D or a carbon tax by proposing that the sale of automobiles be banned immediately. It took a lot of scientific data, solidification of scientific consensus, and communication/movement-building over time to get current measures on climate change, and the most effective measures actually passed have been ones that minimized pain to the public (and opposition), like supporting the development of better solar energy.
Another analogy in biology: if you’re worried about engineered pandemics and it’s a struggle to fund extremely cost-effective low-hanging fruit in pandemic prevention, it’s not a better strategy to try to ban all general-purpose biomedical technology research.