But there are two reasons: first, humans can’t self-improve very well; … Second, humans have had a great deal of evolution shaping our instincts to guide us toward cooperation.
In general, my intuition about “comparing to humans” is the following:
- the abilities that humans have can be replicated
- the limitations that humans have may be irrelevant on a different architecture
This probably sounds unfair, as if I were arbitrarily and inconsistently choosing “it will/won’t be like humans” depending on what benefits the doomer side at each point of the argument. Yes, it will be like humans where humans are strong (they can think, act in the real world, communicate). No, it won’t be like humans where humans are weak (they are mortal, get tired or distracted, are not aligned with each other, are bad at multitasking).
It probably doesn’t help that most people start with the opposite intuition:
- humans are special; consciousness / thinking / creativity is mysterious and cannot be replicated
- human limitations are laws of nature (many of them also apply to the ancient Greek gods)
So, not only do I contradict the usual intuition, but I also do it inconsistently: “Creating a machine like a human is possible, except it won’t really be like a human.” I shouldn’t have it both ways at the same time!
To steelman the criticism:
- every architecture comes with certain trade-offs; they may be different ones, but they are never non-existent
- some limitations are genuine laws of nature, e.g. Landauer’s principle (see the rough numbers after this list)
- the practical problems of an AI building new technology shouldn’t be completely ignored; the sci-fi factories may require so much action in the real world that the AI could only build them after conquering the world (so they cannot be used as an explanation for how the AI will conquer the world)
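To give Landauer’s principle a sense of scale (my own back-of-envelope numbers, not part of the original argument): it bounds the energy needed to erase one bit of information at temperature $T$, so at room temperature even a physically optimal computer pays at least

$$E_{\min} = k_B T \ln 2 \approx 1.38 \times 10^{-23}\,\mathrm{J/K} \times 300\,\mathrm{K} \times 0.693 \approx 2.9 \times 10^{-21}\,\mathrm{J}$$

per erased bit. That said, this floor sits several orders of magnitude below what current chips dissipate per operation, so it constrains far-future hardware much more than anything near-term.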
I don’t have a short and convincing answer here; it just seems to me that even relatively small changes to humans themselves might produce something dramatically stronger. (But maybe I underestimate the complexity of such changes.) Imagine a human with IQ 200 who can think 100 times faster and never gets tired or distracted; imagine a hundred such humans, perfectly loyal to their leader, willing to die for the cause… if dictators can currently take over countries (which probably also involves a lot of luck), such a group should be able to do it too, and more reliably. A great advantage over a human wannabe dictator would be their capacity to multitask; they could try infiltrating and taking over all the powerful groups at the same time.
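To make the speed advantage concrete (an illustration with round numbers I picked, not a prediction): a hundred minds, each thinking a hundred times faster than a human, accumulate

$$100 \;\text{minds} \times 100\times \;\text{speed} = 10{,}000 \;\text{subjective person-years per calendar year},$$

i.e. roughly eight hundred person-years of coordinated planning for every real month, with none of it lost to sleep, fatigue, or internal conflict.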
(I am not saying that this is literally how the AI will do it. I am saying that things hypothetically much stronger than humans, including intellectually, are quite easy to imagine. Just as a human with a sword can overpower five humans, and a human with a machine gun can overpower a hundred humans, the AI may be able to overpower billions of humans without hitting the limits given by the laws of physics. Perhaps it could do so even if the humans have already taken precautions based on the previous 99 AIs that started their attack prematurely.)