Injecting some numbers into the AGI debate—by Boaz Barak

Link post

I found this article to be useful context for Boaz Barak’s more recent article about the probability of AI Doom.

In this article, the author (through very rough back-of-the-envelope calculations, or BOTECs):

  1. Estimates the number of parameters needed for AGI (tentative conclusion: x10-100 more than GPT-3)

  2. Estimates the cost of training AGI of that scale (tentative conclusion: $50B - $5T)

  3. Estimates the cost of inference for AGI of that scale (tentative conclusion: $10/hour - $100/hour)

  4. Notes some challenges that will need to be addressed to reach AGI

  5. Closes with a reflection on consciousness and AI risk

Here are some excerpts:

Bottom line. Overall, it seems that [we would need to scale model size by] at least 10-100 [to reach human-level AGI], though this is an extremely rough estimate. Also, while a [10-100] Trillion model might be the “core” of a system replacing a human worker, it will not be the whole of it, and we are likely to need new ideas beyond scale for these other components. In particular, a model designed for back-and-forth interaction is unlikely to simply use an auto-regressive language model as a black box, not even with chain-of-thought reasoning.

This estimate is informed by 1) comparing the context length of current models with the context length needed for processing one year of continuous logging (result: x100), 2) evidence on scaling from the BIG-bench paper (>x10 for human-level performance), and 3) a comparison between the parameter counts of current models and an estimate of the “number of parameters” in the human brain (the ratio is x100-x1000).

It is unclear to me how he gets from these inputs to the overall x10-x100 estimate. There is also a digression about adaptability and how difficult it will be to go from 95% reliability to 99% reliability.
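For concreteness, here is one hedged way to put those three signals side by side (my own rough BOTEC, not a reconstruction of the author's method); a geometric mean of the multipliers lands near the top of the x10-x100 range:

```python
# My own rough BOTEC, not the author's method: combine the three rough
# multipliers over GPT-3 scale cited as evidence in the article.
from math import prod

signals = {
    "context needed for a year of continuous logging": 100,  # x100
    "BIG-bench scaling to human-level performance": 10,       # >x10 (lower bound)
    "brain 'parameters' vs. current model parameters": 300,   # x100-x1000, rough midpoint
}

geometric_mean = prod(signals.values()) ** (1 / len(signals))
print(f"geometric mean of the three signals: ~x{geometric_mean:.0f}")  # ~x67
```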

There are differing estimates of how much the ~100B-parameter GPT-3 model cost to train, but they fall in the $5M-$20M range; let’s say $5M in pure training costs for simplicity. (This would correspond to a cost of $5M/500B = 10^-5 dollars per inference, which roughly matches the estimates below.) An X Trillion model (which, like Chinchilla, but unlike PaLM, would be fully trained to max out its advantages) might cost a factor of 100X^2 more. For X=10, this would be a cost of $50B. For X=100, this would be 5 Trillion dollars!
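My reading of the quoted arithmetic, as a hedged sketch (the $5M baseline and the quadratic Chinchilla-style scaling come from the excerpt; the code itself is mine):

```python
# Hedged sketch of the quoted training-cost BOTEC (my code, not the
# author's): under Chinchilla-style training, compute ~ params * tokens
# and tokens scale with params, so cost grows quadratically with size.
GPT3_PARAMS_T = 0.1      # GPT-3: ~100B parameters, i.e. 0.1 Trillion
GPT3_TRAIN_COST = 5e6    # assumed $5M pure training cost (from the excerpt)

def train_cost(x_trillion_params: float) -> float:
    scale = x_trillion_params / GPT3_PARAMS_T  # 10X for an X-Trillion model
    return GPT3_TRAIN_COST * scale ** 2        # quadratic => the 100X^2 factor

for x in (10, 100):
    print(f"X={x}: ~${train_cost(x):,.0f}")
# X=10:  ~$50,000,000,000     ($50B)
# X=100: ~$5,000,000,000,000  ($5T)
```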

Clearly, finding a way to train N-sized models on N tokens using less than O(N^2) steps (e.g., O(N log N)) can be crucial for scaling larger models. [...]
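To get a feel for why sub-quadratic training would matter, here is my own hedged arithmetic (not the author's): it simply re-scales the same $5M baseline under the two growth laws for an X=10 model.

```python
# My own hedged arithmetic (not from the article): relative training-cost
# factor over the ~100B baseline if total steps grow as N^2 vs. N*log(N).
from math import log

BASELINE_PARAMS = 1e11   # ~100B parameters
BASELINE_COST = 5e6      # assumed $5M training cost for the baseline

def cost(n_params: float, law: str) -> float:
    r = n_params / BASELINE_PARAMS
    if law == "quadratic":                      # steps ~ N^2
        factor = r ** 2
    else:                                       # steps ~ N*log(N)
        factor = r * log(n_params) / log(BASELINE_PARAMS)
    return BASELINE_COST * factor

for law in ("quadratic", "nlogn"):
    print(law, f"~${cost(1e13, law):,.0f}")     # 10-Trillion-parameter model
# quadratic ~$50,000,000,000  (the $50B figure above)
# nlogn     ~$600,000,000     (roughly two orders of magnitude cheaper)
```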

However, the question is how many inferences we need to make per hour to simulate a human. The average person apparently speaks about 150 words (say 200 tokens) per minute. This would suggest we need about 200*60 ~ 10K inferences per hour to simulate a person. For an X Trillion sized model, that would cost $X per hour, which is not too bad if X is between 10 to 100.
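Putting the quoted numbers together (my hedged reading; it assumes per-inference cost scales linearly with parameter count from the ~$10^-5 per inference figure above):

```python
# Hedged sketch of the inference-cost BOTEC (my code, not the author's).
# Assumes cost per inference scales linearly with model size from the
# ~$1e-5 per inference figure for a ~100B (0.1T) parameter model.
COST_PER_INFERENCE_GPT3 = 1e-5   # dollars, ~0.1T-parameter model
TOKENS_PER_HOUR = 200 * 60       # ~150 words/min => ~200 tokens/min => ~12K/h ("~10K")

def cost_per_hour(x_trillion_params: float) -> float:
    per_inference = COST_PER_INFERENCE_GPT3 * (x_trillion_params / 0.1)
    return per_inference * TOKENS_PER_HOUR

for x in (10, 100):
    print(f"X={x}: ~${cost_per_hour(x):.0f}/hour")
# X=10:  ~$12/hour
# X=100: ~$120/hour  -- i.e. roughly $X per hour, as in the excerpt
```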

The above price point [...] will likely be an underestimate. [...] We don’t know the speed of thought, but it will increase the number of inferences needed. [...] Second, to reach high reliability, it is likely that we will need to make [10 to 100] inferences and use some mechanism to choose the best one out of these [...] options.

I am confused here: taken at face value, these estimates imply an inference cost of $100/h to $10k/h, but the stated bottom line is $10/h to $100/h.
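Spelling out the arithmetic behind that confusion (my own check):

```python
# My own check: $X/hour base cost (X = 10 to 100) times the quoted
# 10-100x reliability multiplier.
base_per_hour = (10, 100)
reliability_multiplier = (10, 100)

low = base_per_hour[0] * reliability_multiplier[0]    # $100/h
high = base_per_hour[1] * reliability_multiplier[1]   # $10,000/h
print(f"${low}/h to ${high:,}/h")                     # vs. the stated $10/h-$100/h
```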

The cost estimates above explicitly ignore future improvements to hardware, relying on current costs instead.

Some challenges we face include:

(1) Maintaining long-term context without model size exploding.

(2) Making training more efficient, particularly finding ways to train N-sized models at a near-linear cost instead of quadratic in N.

(3) Avoiding running out of data, perhaps by using video/images and programmatically generated interactive simulations.

(4) Handling multi-step interactions without the interaction going “off the rails” and without needing to scale policy/​value gradients to an unfeasible number of interactions (perhaps by using smaller RL-trained “verifier models” to choose between options produced by a large statically-trained model).
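Regarding challenge (4), here is a minimal sketch of the kind of verifier-reranking loop that parenthetical gestures at (my own illustration; the generator and verifier interfaces are hypothetical stand-ins, not anything from the article):

```python
# Minimal sketch of "a small verifier chooses between options produced
# by a large statically-trained model" (my illustration; the callables
# below are hypothetical stand-ins, not APIs from the article).
from typing import Callable, List

def best_of_k(
    prompt: str,
    generate: Callable[[str, int], List[str]],    # large, statically-trained model
    verifier_score: Callable[[str, str], float],  # small RL-trained verifier
    k: int = 16,
) -> str:
    """Sample k candidate continuations and return the one the verifier scores highest."""
    candidates = generate(prompt, k)
    return max(candidates, key=lambda c: verifier_score(prompt, c))
```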

The truth is that the reason our world hasn’t been destroyed so far is not that humans were not intelligent enough, nor that we were not malicious enough. First, throughout most of human history, we did not have technologies such as nuclear weapons and others with the potential to cause planet-scale destruction. Second, while imperfect, we have developed some institutions, including international bodies, the non-proliferation treaty, standards for biolabs, pandemic preparations, and more to keep some of these capabilities in check. Third, we were lucky. From climate change through pandemic preparation to nuclear disarmament, humanity should be doing much more to confront the risks and harms of our own making. But this is true independently of artificial intelligence. Just as with humans, my inclination with AI would not be to try to make systems inherently moral or good (“aligned” in AI-speak) but rather to use the “trust, but verify” approach. One moral of computational complexity theory is that computationally weak agents can verify the computation of more powerful processes.
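That closing claim points at the standard gap between finding and checking (interactive proofs and delegation of computation make the stronger version precise); as a toy illustration of the asymmetry (mine, not the author's):

```python
# Toy illustration (mine, not from the article): finding a subset that
# sums to a target is hard in general, but checking a claimed subset
# produced by a powerful, untrusted solver is a single cheap pass.
from typing import List

def verify_subset_sum(numbers: List[int], claimed_indices: List[int], target: int) -> bool:
    if len(set(claimed_indices)) != len(claimed_indices):
        return False   # no index may be reused
    if any(i < 0 or i >= len(numbers) for i in claimed_indices):
        return False   # indices must be in range
    return sum(numbers[i] for i in claimed_indices) == target

# A powerful (untrusted) solver claims indices [0, 2] of [3, 5, 4, 9] sum to 7:
print(verify_subset_sum([3, 5, 4, 9], [0, 2], 7))  # True
```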
