This list doesn’t really help your point:

Almost all of the linguistic ‘universals’ are universal to languages, not humans, and would necessarily apply to AIs who speak our languages.
Most of the social ‘universals’ are universal to societies, not humans, and apply just as easily to birds, bees, and dolphins: coalitions, leaders, conflicts?
AIs will inherit some understanding of all the idiosyncrasies of our complex culture just by learning our language and being immersed in it.
Kolmogorov complexity is not immediately relevant to this point. No matter how large the evolutionary landscape is, there are a small number of stable attractors in that landscape that become ‘universals’: species, parallel evolution, and so on (a toy sketch at the end of this comment illustrates the point).
We are not going to create AIs by randomly sampling mindspace. The only way they could be truly alien is if we evolved a new simulated world from scratch, with its own evolutionary history and de novo culture and language. But of course that is unrealistic and useless on so many levels.
They will necessarily be samples from our mindspace—otherwise they wouldn’t be so useful.
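As a minimal sketch of the attractor claim (everything in it, including the genome encoding and the fitness function, is invented purely for illustration): the toy Python model below drops a thousand random genotypes into a roughly million-point search space and lets each one hill-climb. Despite the size of the space, every run settles onto one of just three peaks.

```python
import random

L = 20  # genome length; the search space holds 2**20 (about a million) genotypes

# Hypothetical fitness function: rewards agreement with a few fixed
# "target" patterns, carving a handful of peaks (attractors) into a vast space.
TARGETS = [
    (1,) * L,                        # all ones
    (0,) * L,                        # all zeros
    tuple(i % 2 for i in range(L)),  # alternating
]

def fitness(genome):
    # Best Hamming agreement with any of the target patterns.
    return max(sum(g == b for g, b in zip(genome, t)) for t in TARGETS)

def hill_climb(genome):
    # Greedy one-bit-flip hill climbing until no neighbor improves fitness.
    while True:
        neighbors = [genome[:i] + (1 - genome[i],) + genome[i + 1:]
                     for i in range(L)]
        best = max(neighbors, key=fitness)
        if fitness(best) <= fitness(genome):
            return genome
        genome = best

random.seed(0)
attractors = {hill_climb(tuple(random.randint(0, 1) for _ in range(L)))
              for _ in range(1000)}

# 1000 random starts in a ~10^6-point space all settle onto just a few peaks.
print(f"distinct attractors reached: {len(attractors)}")  # prints 3 (or fewer)
```

The same qualitative result holds however rugged you make the landscape: the count of endpoints is set by the number of stable peaks, not by the size of the space you start from.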
Computers so far have been very different from us. That is partly because they have been built to compensate for our weaknesses, to be strong where we are weak. They compensate for our poor memories, our terrible arithmetic module, our poor long-distance communication skills, and our poor ability at serial tasks. That is how they have managed to find a foothold in society, before mastering nanotechnology.
IMO, we will probably be seeing considerably more of that sort of thing.
Computers so far have been very different from us.
[snip]
Agree with your point, but so far computers have been extensions of our minds and not minds in their own right. And perhaps that trend will continue long enough to delay AGI for a while.
For AGI, for them to be minds, they will need to think and understand human language, and this is why I say they “will necessarily be samples from our mindspace”.