If we travel across the universe, and meet an AI who travelled across the universe before it met us, we can assume there was some kind of “evolutionary” pressure on this AI.
If we build a new AI, not knowing what exactly we are doing (especially if we tried some really bad idea like: “just connect the neurons randomly, give it huge computing power, and see what happens; trust me, the superintelligence will discover the one true morality”), there is no natural selection yet, and the new AI may do pretty much anything.
More precisely, natural selection needs iterations. Living things with much shorter life cycles than humans evolve a whole lot more quickly than humans. Bacteria have evolved strains that resist antibiotics, and we have not had antibiotics for even one-tenth the time they would need to be around to influence the human genome very much.
The point being that an AI which spews slightly varied copies of itself far and wide may evolve quite a lot faster than a human can. Or, essentially the same thing, an AI which runs simulations of variations of itself to see which have potential in the real world, and then emits many varied copies of those, might evolve on afterburners compared to DNA-mediated evolution.
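As a toy illustration of why iteration count dominates evolutionary speed, here is a minimal mutation-plus-selection sketch (all names, fitness numbers, and generation counts are hypothetical, chosen only to make the point):

```python
import random

def evolve(generations, population=20, mutation_scale=0.05, seed=0):
    """Toy model: each generation spawns slightly varied copies of the
    current best replicator and keeps the fittest. 'Fitness' is just a
    number; this is an illustration, not a model of any real system."""
    rng = random.Random(seed)
    best = 0.0
    for _ in range(generations):
        # Spawn slightly varied copies of the current best replicator...
        offspring = [best + rng.gauss(0, mutation_scale) for _ in range(population)]
        # ...and let selection keep whichever variant is fittest.
        best = max([best] + offspring)
    return best

# Same wall-clock span, wildly different iteration counts: a replicator
# that copies itself hourly gets thousands of generations in the time a
# slow replicator gets a handful.
human_like = evolve(generations=4)    # a few slow generations
ai_like = evolve(generations=4000)    # the same span, fast copying
```

With the same random seed, the fast replicator's first four generations match the slow one's exactly; everything after that is accumulated advantage, which is the whole argument in miniature.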
Not quite. Counting AIs is much harder than counting people: an AI is neither discrete nor homogeneous.
I think it is most unlikely that the world could be controlled by one uniform, homogeneous intelligence. It would need to be physically distributed over multiple computers at the very least, and it would not be a giant von Neumann machine doing one thing at a time: there would be lots of subprocesses working somewhat independently. It seems almost certain that they would eventually fragment to some extent.
People are not that homogeneous either. We have competing internal thoughts.
Further, an AI will be composed of many components, and those components will compete with each other. Suppose one part of the AI develops a new and better theorem prover. Pretty soon the rest of the AI will start to use that new component and the old one will die. Over time the AI will consist of the components that are best at promoting themselves.
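That replacement dynamic can be sketched as a toy model, assuming a hypothetical AI that benchmarks rival implementations of the same internal service and routes everything to the winner (the component names and scores here are invented for illustration):

```python
# Toy model of selection among an AI's internal components: the AI keeps
# whichever implementation of a service scores best on its own benchmark,
# and the loser simply stops being used.

def old_prover(goal):
    # Incumbent theorem prover: pretend its benchmark score is low.
    return 0.4

def new_prover(goal):
    # Newly developed component that benchmarks better.
    return 0.9

class AI:
    def __init__(self):
        self.services = {"theorem_prover": old_prover}

    def propose(self, name, candidate, benchmark):
        """Adopt the candidate only if it outperforms the incumbent;
        otherwise the incumbent is retained and the candidate dies out."""
        if benchmark(candidate) > benchmark(self.services[name]):
            self.services[name] = candidate

ai = AI()
ai.propose("theorem_prover", new_prover,
           benchmark=lambda prover: prover("sample goal"))
# From here on, the rest of the AI transparently calls the better prover.
```

Run this loop over many components for a long time and the AI consists, as the comment says, of whatever components are best at winning these internal benchmarks.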
It will be a complex environment. And there will never be enough hardware to run all the programs that could be written, so there will be competition for resources.
Natural selection needs time.