“The AI’s trajectory of self-modification has to come from somewhere.”
Existence has to come from somewhere, therefore God; life has to come from somewhere, therefore creationism; and so on. “Has to come from somewhere” typically begins an invalid argument in which the “somewhere” is filled in with the very belief being argued for.
In this particular case, an algorithm that can take and optimize other algorithms (turning computationally intractable ones tractable) can take its own code as input and produce something superior, without modelling the world, maximizing any real-world goal, dealing with uncertainty, reflecting upon its own implementation, or any of that baggage, which (a) takes time to develop and (b) slows the resulting software down. The self-improvement does not have to arise from any real-world desire at all, and for any notable self-improvement (comparable in significance to the work done by humans) to arise from such a desire would take a superintelligence in the first place.
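To make that concrete, here is a deliberately trivial sketch, entirely my own construction (the `optimize` function, the `FoldConstants` class, and the constant-folding trick are illustrative assumptions, not anything proposed above): a program that takes source code as input, improves it slightly, and accepts its own source as readily as any other string, with no goals or world-model anywhere in the loop.

```python
# Purely illustrative: a tiny "algorithm optimizer" that folds constant
# arithmetic in Python source, then is handed its own source as input.
# It never models the world or pursues a goal; code in, code out.
# (Requires Python 3.9+ for ast.unparse.)
import ast
import inspect
import operator

_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv}

class FoldConstants(ast.NodeTransformer):
    """Replace e.g. (60 * 60) with 3600 wherever both operands are literals."""
    def visit_BinOp(self, node):
        self.generic_visit(node)  # fold children first, so nesting works
        op = _OPS.get(type(node.op))
        if (op and isinstance(node.left, ast.Constant)
                and isinstance(node.right, ast.Constant)):
            try:
                return ast.Constant(op(node.left.value, node.right.value))
            except TypeError:
                return node
        return node

def optimize(source: str) -> str:
    """Source code in, (very slightly) improved source code out."""
    return ast.unparse(FoldConstants().visit(ast.parse(source)))

print(optimize("seconds_per_day = 60 * 60 * 24"))  # seconds_per_day = 86400
print(optimize(inspect.getsource(optimize)))       # its own code is just another input
```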
The seed isn’t the superintelligence.
Something that is not in some way superhumanly intelligent would not be able to participate massively and uncontrollably in the technological progress of its own improvement: the AI can also be improved by human work, and a sub-human general intelligence can, of course, be improved faster by humans (using specialized software tools) than by its own inferior general intelligence.
An AI at or below human intelligence can still work much faster than a human, because it thinks faster. (As a rule, AIs are not made of biological neurons.) An algorithm with a lot of computational power can (say, evolutionarily) brute-force solutions to general-intelligence hurdles that human engineers have run into. These scenarios can be more dangerous, both because an intelligence explosion that starts out stupider than we expect can probably be set off sooner, and because evolutionary algorithms tend to produce complex, counter-intuitive results that humans have a harder time understanding.
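For concreteness, here is what the mechanics of such trial and error look like in miniature: a (1+1) evolutionary algorithm on the OneMax toy problem (the problem choice, function names, and parameters are all illustrative assumptions, not anything from this exchange). All the progress comes from mutation plus selection, none from insight.

```python
# Minimal (1+1) evolutionary algorithm on OneMax (maximize the count of
# 1-bits). Illustrative only: brute-forcing actual general-intelligence
# hurdles would need an enormously harder fitness function and vastly
# more compute than this toy.
import random

def evolve(n_bits=64, max_steps=10_000, seed=0):
    rng = random.Random(seed)
    parent = [rng.randint(0, 1) for _ in range(n_bits)]
    for step in range(max_steps):
        # Flip each bit independently with probability 1/n_bits.
        child = [b ^ (rng.random() < 1 / n_bits) for b in parent]
        if sum(child) >= sum(parent):  # keep the child if it is no worse
            parent = child
        if sum(parent) == n_bits:      # all ones: the "hurdle" is solved
            return step + 1, parent
    return max_steps, parent

steps, _ = evolve()
print(f"OneMax solved in {steps} mutate-and-test steps")
```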
You’re still positing it to be superhuman to an extreme extent. Your “brute force” (trial and error would be a better general description) is achieving superhuman feats from the very start. Evolutionary approaches are incredibly inefficient inventors; to make them competitive you have to posit hardware mind-bogglingly superior to even the largest estimates of the computing power of the human brain, hardware which humans could use in far more directed ways, and which would itself be preceded by less powerful hardware applied to other problems.
But okay. Note how evolution produced a being (H. sapiens) that has no particular intention of following the original ‘target’ of the evolutionary optimization process, and instead tends to pick goals for itself that are less stupid, while cleverly subverting the evolutionary ‘solutions’. You don’t get something that cares to follow your programming, apart from the remote possibility that it reads the original code and thinks about its purpose. Yeah, the results are not guaranteed to be favourable, far from it, but that’s not because it would follow the goals you originally coded into the evolutionary algorithm. And you can’t fix that just by coding friendlier goals into an evolutionary algorithm.
This is precisely like arguing with the promoters of any other ridiculous religion. You challenge one of their beliefs, and they come up with all sorts of rationalizations which are severely at odds with much else that they believe, or with what they argued minutes earlier (in real life) or in the post above (online).