A massive, singular super-entity, as sometimes implied on this site, strikes me as not just improbable but physically impossible (at least until you reach black-hole-computer levels of technology).
A Skynet-type intelligence is fiction anyway, and if you really look at the limits of intelligence and AGI, I think a bunch of accelerated, high-IQ, human-ish brains are much closer to those limits than most here would give credence to.
On the one hand you have extremely limited AIs that can't communicate with each other. They would be extremely redundant and waste a lot of resources, because each would have to run the exact same processes and discover the exact same things on its own.
On the other hand you have a massive, singular AI individual made up of thousands of computing systems, each devoted to storing separate information and doing a separate task; basically a human-like brain distributed over all available resources. This will inevitably fail as well: operations done on one side of the system could be light years away from where the data is needed (we don't know how big the AI will get or what the constraints of its situation will be, but an AGI has to adapt to every possible situation).
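To put numbers on that latency problem, here's a quick back-of-the-envelope sketch: one-way signal delay at light speed for a few illustrative distances. The distances are my own example values, not anything from the post; only the speed of light is a hard physical constant.

```python
# One-way communication latency at light speed between parts of a
# hypothetical distributed AI. Distance values below are illustrative
# assumptions for the sake of the argument.

C = 299_792_458           # speed of light in vacuum, m/s
LIGHT_YEAR = 9.4607e15    # metres in one light year

def one_way_latency(distance_m: float) -> float:
    """Seconds for a signal to cross distance_m at light speed."""
    return distance_m / C

examples = [
    ("across Earth (~12,700 km)", 1.27e7),
    ("Earth to Moon (~384,000 km)", 3.84e8),
    ("1 astronomical unit", 1.496e11),
    ("1 light year", LIGHT_YEAR),
]

for label, d in examples:
    print(f"{label}: {one_way_latency(d):,.1f} s one-way")
```

Even at one astronomical unit the delay is about eight minutes each way, and at a light year it is, by definition, a full year; any "single mind" whose thoughts depend on round trips at that scale simply can't function as one coherent individual.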
The best is a combination of the two: as much communication through the network as possible, but with areas of resources specialized for different purposes. This could lead to Skynet-like intelligences, or it could lead to a very individualistic AI society where the AI isn't a single entity but a massive variety of individuals in different states working together. It probably wouldn't be much like human civilization, though. Human society evolved to fit a variety of restrictions that aren't present for AI, so it could adopt a very different structure, and things like morals (as we know them, anyway) may not be necessary.