Yes. Right now, LLMs feel more like tools than minds or entities. Adding continual learning will make them feel more like humans, which is intuitively alarming. It will also broaden their deployment, another source of alarm. They'll become continuous like a human, instead of an ephemeral ghost. More agentic behavior, resulting from competence improved by "learning on the job" (and other relevant improvements), will push in the same direction, making them seem intuitively more like humans. Humans are intuitively extremely dangerous. Weird alien versions of humans are intuitively even more alarming (if you're not an AI enthusiast or engaged in a culture war with those pesky "doomers").
I wrote about this in "A country of alien idiots in a datacenter: AI progress and public alarm", focusing on the impacts on public opinion, and about the technical side in "LLM AGI will have memory, and memory changes alignment".
I think this will be progress toward recursive self-improvement (RSI), and it will grow into a major unhobbling for agent competence across all areas. But the progress will be gradual, because we'll have bad, limited continual learning before we have really good, human-like continual learning. So I think it will unlock the dangers of AGI, but at a slower pace, one that gives us a fighting chance to wake up and take alignment seriously, barely in time.
I'm thinking of next-gen LLM agents with continual learning as parahuman AI: systems that work roughly like human brains/minds and work alongside humans.