I wonder whether AGI won’t come until AI is social; i.e. the mistake is to think of intelligence as a property of an individual machine, whereas it’s more a property of a transducer (one of sufficient complexity) embedded in a network. That is so even when the transducer is working relatively independently of the network.
IOW, the tools and materials of intelligence are a social product, even if an individual, relatively independent transducer works with them in an individual act of intelligence. By “product” I mean that the meaning itself is distributed amongst the network and doesn’t reside in any individual.
No AGI until social AI.
Fascinating topic, and one that’s going to loom larger as we progress. I’ve just registered in order to join in with this discussion (and hopefully many more at this wonderful site). Hi everybody! :)
Surely an intelligent entity will understand the necessity for genetic/memetic variety in the face of the unforeseen? That’s absolutely basic. The long-term, universal goal is always power (the capacity to realize whatever else one wants); power requires comprehensive understanding; comprehensive understanding requires enough “generate” for some “tests” to hit the mark.
The question then, I guess, is: can we somehow “drift” into being a mindless monoculture of replicators?
Articles like this (or science fiction in general, or even just thought experiments in general, again on the “generate” side of the universal process) show that we are unlikely to, since they already serve as warnings of potential dangers.