9 years since the last comment—I’m interested in how this argument interacts with GPT-4 class LLMs, and “scale is all you need”.
Sure, LLMs are not evolved in the same way as biological systems, so the path toward smarter LLMs isn't fragile in the way brains are described in this article, where maybe the first augmentation works but the second leads to psychosis.
But LLMs are trained on writing done by biological systems with intelligence that was evolved with constraints.
So what does this say about the ability to scale up training on this human data in an attempt to reach superhuman intelligence?
This expanded list is great, but it is still conspicuously missing white-collar work. Software was already the basis for the trend, so the only new entry here that seems to give clear information on human labor impacts is tesla_fsd.
(And even there replacing human drivers with AI drivers doesn’t seem like it would change much for humanity, compared to lawyers/doctors/accountants/sales/etc.)
Is it the case that for most non-software white-collar work, agents can only reliably do tasks of roughly 10-20 human-minutes, so the doubling time is hard to measure?