[Question] What’s the evidence that LLMs will scale up efficiently beyond GPT4? i.e. couldn’t GPT5, etc., be very inefficient?

A lot of the recent talk about OpenAI (the various events, its future path, and so on) seems to assume that further scaling beyond GPT4 will pose some sort of 'danger' that grows linearly or super-linearly with the amount of compute, and thus becomes extraordinary if you plug in 100x as much.

That assumption doesn’t seem obvious at all.

It seems quite possible that GPT5, and further improvements, will be very inefficient.

For example, a GPT5 that requires 10x the compute of GPT4 but is only moderately better, and a GPT6 that requires 10x the compute of GPT5 (i.e. 100x that of GPT4) but is again only moderately better.
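To make the "moderately better" scenario concrete, here is a toy sketch. It assumes loss follows a power law in training compute; the exponent used is purely illustrative, not a measured value for any actual model:

```python
# Hypothetical illustration (not a claim about GPT5): under a power-law
# scaling assumption, loss improves only slowly with extra compute.
# loss(C) = a * C**(-alpha); alpha = 0.05 is an assumed, illustrative value.

def loss(compute, a=1.0, alpha=0.05):
    """Toy power-law loss as a function of training compute."""
    return a * compute ** (-alpha)

base = loss(1.0)            # "GPT4-scale" compute, normalized to 1
for mult in (10, 100):      # 10x ("GPT5") and 100x ("GPT6") compute
    improvement = 1 - loss(mult) / base
    print(f"{mult:>4}x compute -> {improvement:.1%} lower loss")
```

Under that assumed curve, 10x the compute buys roughly an 11% loss reduction and 100x buys roughly 21%, which is exactly the kind of diminishing return the scenario above describes.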

In this case, LLMs don’t seem to pose any serious danger at all.

The problem extinguishes itself, because random people won’t be able to acquire that amount of compute in the foreseeable future. Only serious governments, institutions, and companies with multi-billion-dollar capex budgets will even be able to consider acquiring something much better.

And although such organizations can’t be considered perfectly responsible, they will still very likely be responsible enough to handle LLMs that are only a few times more intelligent.