Why I think that might happen:
Background: almost nothing that most humans do actually requires fluid intelligence. Most people, most of the time, are executing routine cognitive operations. And most of the people who are using their fluid intelligence on the job could do just as well or better if they had a massive memory of case studies to extrapolate from instead of attempting novel reasoning.
Most of Earth’s geniuses currently spend most of their time doing routine cognitive operations—pattern matching from their prior experience to solve problems, often in the context of automatable tasks like implementing experiments or solving engineering problems. When those classes of task are automated, it will free up those geniuses’ capacity.
At this point, most of the work in the world will be automated or in the process of being automated. Science and tech development will be going faster than ever in human history. It will be obvious to the whole world that AI is a really big deal.
Also, it will be obvious to many people that there’s something missing: the AIs are doing design and engineering faster and better than human civilization ever did, and they’re accelerating the science, but they’re not doing the science. There will be enormous financial and strategic incentives to crack that.
Ok, thanks. I think I’ll probably have to chew on this scenario to say much of use. (I mean, I’ve thought about related things, but haven’t asked myself about this scenario.) My initial reaction is skepticism, which I think comes from a combo of:
LLMs are somewhat less useful than you seem to think
Humans apply somewhat more GI than you seem to think
The most important stuff would still be bottlenecked on human GI and would be hard to accelerate; you can’t simply “free up” the humans in a super liquid, fungible way
If this were happening, some pretty strong political forces would be at play, including hopefully / kinda probably (??) a strong push to stop the spiral
But I’m not super confident about any of that. It’s strategically relevant but ATM I don’t have much novel perspective to offer, and it seems to need some other expertise (e.g. a good understanding of politics, of science and tech research, and similar).