It’s how recursive self-improvement starts out.
First, the global “AI models + human development teams” system improves through iterative development and evaluation. Then the AI models take on more responsibilities in terms of ideation, process streamlining, and architecture optimization. And finally, an AI agent groks enough of the process to take on all responsibilities, and the intelligence explosion takes off from there.
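To make that trajectory concrete, here's a toy model of the loop, with every quantity made up for illustration: capability is a single number, humans add a fixed increment per iteration, the AI's contribution scales with its own current capability, and responsibility gradually shifts from the former to the latter.

```python
# Toy model of the "AI models + human development teams" loop.
# All functions and numbers here are invented for illustration.

def human_improve(capability: float) -> float:
    """Humans contribute roughly the same amount each iteration."""
    return capability + 1.0

def ai_improve(capability: float) -> float:
    """The AI's contribution grows with its own capability."""
    return capability * 1.1

capability = 1.0
for step in range(20):
    # Responsibility shifts gradually from the human team to the AI.
    ai_share = min(1.0, step / 10)
    capability = ((1 - ai_share) * human_improve(capability)
                  + ai_share * ai_improve(capability))
    print(f"step {step:2d}  ai_share {ai_share:.1f}  capability {capability:8.2f}")
```

While humans dominate, growth is roughly linear; once ai_share reaches 1, each step multiplies capability by 1.1 and the curve goes exponential. That's the whole "explosion" in two functions.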
You’d think someone would try to use AI to automate the production and distribution of necessities to drive the cost of living down toward zero first, but it seems that was just a dream of naive idealism. Oh well. Still, could someone please get on that?
I’m imagining a case where there’s no intelligence explosion per se, just bags-of-heuristics AIs with gradually increasing competence.
But then again, what are human minds but bags of heuristics themselves? And AI can evolve orders of magnitude faster than we can. Handing over the keys to its own bootstrapping will only accelerate it further.
If the future trajectory to AGI is just “systems of LLMs glued together with some fancy heuristics”, then maybe a plateau in Transformer capabilities will keep things relatively gradual. But I suspect that we are just a paradigm shift or two away from a Generalized Theory of Intelligence. Just figure out how to do predictive coding of arbitrary systems, combine it with narrative programming and continual learning, and away we go! Or something like that.
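For what "predictive coding" means here, a minimal single-layer sketch in the Rao & Ballard spirit (purely illustrative, not a proposal for the architecture above): a latent state settles by minimizing prediction error on each observation, and the generative weights learn online as a crude stand-in for continual learning.

```python
# Minimal single-layer predictive coding, Rao & Ballard style.
# A latent state mu settles by gradient descent on the prediction
# error for each observation; the generative weights W then take a
# small Hebbian-style step, learning online from a stream of data.
import numpy as np

rng = np.random.default_rng(0)
obs_dim, latent_dim = 8, 3
W = rng.normal(scale=0.1, size=(obs_dim, latent_dim))  # generative model

def settle(x, W, steps=100, lr=0.05):
    """Infer the latent cause of x by minimizing ||x - W @ mu||^2."""
    mu = np.zeros(latent_dim)
    for _ in range(steps):
        error = x - W @ mu      # top-down prediction error
        mu += lr * W.T @ error  # adjust latents to explain the error
    return mu

# A stream of observations from hidden causes the model never sees.
true_W = rng.normal(size=(obs_dim, latent_dim))
for t in range(201):
    x = true_W @ rng.normal(size=latent_dim) + 0.01 * rng.normal(size=obs_dim)
    mu = settle(x, W)
    error = x - W @ mu
    W += 0.01 * np.outer(error, mu)  # learn the generative weights online
    if t % 50 == 0:
        print(f"t={t:3d}  residual error {np.linalg.norm(error):.3f}")
```

Once W comes to span the data's subspace, the settled latents explain most of each observation and the residual drops toward the noise floor. The "arbitrary systems" part is exactly what this sketch doesn't solve.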
Humans come with reproductive and hunting instincts built in. You could call this a bag of heuristics, but these heuristics operate on a different level than AI's, and in particular they might not be the ones chosen for transfer to AIs. Furthermore, humans are harder to copy or parallelize, which gives them a different privacy profile than AIs.
The trouble with intelligence (whether human, artificial, or evolved) is that it is all about regarding the world as an assembly of the familiar. This makes data/experience a major bottleneck for intelligence.
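A toy way to see the bottleneck: a pure "assembly of the familiar" learner, here a 1-nearest-neighbor lookup (my own illustrative stand-in, not anyone's proposed architecture), whose error is governed almost entirely by how much experience it has stored.

```python
# "Assembly of the familiar" as a 1-nearest-neighbor learner: every
# query is answered by recalling the single most similar past case,
# so competence is bottlenecked by stored experience. Illustrative only.
import numpy as np

rng = np.random.default_rng(1)

def world(x):
    """The environment to be modeled (arbitrary choice)."""
    return np.sin(3 * x)

def recall(queries, mem_x, mem_y):
    """Answer each query with the response from the most familiar case."""
    nearest = np.argmin(np.abs(mem_x[:, None] - queries[None, :]), axis=0)
    return mem_y[nearest]

queries = np.linspace(0, 2 * np.pi, 500)
for n in (5, 50, 500):
    mem_x = rng.uniform(0, 2 * np.pi, size=n)  # accumulated experience
    err = np.abs(recall(queries, mem_x, world(mem_x)) - world(queries)).mean()
    print(f"{n:4d} experiences -> mean error {err:.3f}")
```

The model never extrapolates; its error falls only as its library of familiar cases grows, which is the data bottleneck in miniature.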