I once conjectured that
Studying a subject gets progressively harder as you learn more, and the effort required grows exponentially or worse … the initial ‘honeymoon’ phase tends to peter out eventually.
For AI, this would mean that model size / power consumption is exponential in “intelligence” (whatever that means, probably some unsaturated benchmark score). Do the last three years confirm or refute this?
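For concreteness, here is a toy way to state that conjecture quantitatively. The functional form and every constant below are illustrative assumptions, not fits to any real model or benchmark: if a saturating benchmark score follows a power law in training compute, then the compute needed for each additional point of score blows up as the score approaches its ceiling, which is one way to operationalize “effort grows exponentially or worse.”

```python
# Toy sketch only: none of these constants come from real data.
# Assume a saturating benchmark score follows a power law in training compute,
#   score(C) = 1 - (C0 / C)**alpha,
# in the spirit of empirical scaling-law results, with made-up C0 and alpha.

def compute_needed(score: float, c0: float = 1.0, alpha: float = 0.1) -> float:
    """Invert score(C) = 1 - (c0 / C)**alpha: compute required to hit a target score."""
    assert 0.0 < score < 1.0
    return c0 * (1.0 - score) ** (-1.0 / alpha)

if __name__ == "__main__":
    prev = None
    for s in (0.50, 0.60, 0.70, 0.80, 0.90, 0.95, 0.99):
        c = compute_needed(s)
        note = "" if prev is None else f"  ({c / prev:.1f}x the previous row)"
        print(f"target score {s:.2f}: relative compute {c:.3g}{note}")
        prev = c
```

Whether the last three years actually look like this, or like something gentler, is exactly the empirical question.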
If confirmed, would it not give us some optimism that we are not all gonna die? The “true” superintelligence we cannot ever hope to control would require so many resources that we would have to colonize the lightcone as non-superintelligent humans just to get there.
I use the top 4-5 models for fun and profit several hours a day, and my distinct impression is that they do not CARE. They LARP as a human, but they have no drive and no values. These might emerge at some point, but I have not seen any progress on that in the last year or so. Then again, progress is discontinuous and hard to anticipate. We are lucky that is the case. In the famous Yudkowsky-Karnofsky debate from over a decade ago, so far Holden is right: we get smart tools, not true agents, despite all the buzzwords. They don’t care about what is true or accurate; they hallucinate the moment you are not looking, unless you close the feedback loop yourself. They could care. But they do not. Seems like a small price to pay for non-extinction, though.