For what it’s worth, I’ve never found it difficult to get people to understand the concept of UFAI. I have two possible explanations. One is that I’m kidding myself about how well they understand it, but the one I find more plausible is that I’m framing it as an interesting idea, and Eliezer frames it as something people ought to do something about.
Or the people you talk to about UFAI are pre-selected to be more likely to understand it.
I’ve also explained the idea to many people. It’s quite easy to explain in a single conversation, but difficult to explain in such a way that they can re-explain it to someone else and answer possible questions. When I explain it to someone really smart, they usually start asking many questions, which I can quickly answer only because I have read most of LW and many related texts.
Likewise. Most people get it, at least on some level. It’s a lot more difficult to convince them of: 1) the timeline and 2) that they ought to do something about it.
Understanding it is very different from taking it seriously.
It’s a staple of sci-fi.
Not exactly—science fiction is usually set up so that humanity has a chance and that chance leads to victory, and part of Eliezer’s point is that victory is not at all guaranteed.
For a true UFAI, victory is guaranteed. For the AI. Otherwise it’s not actually “intelligent”, merely “somewhat smarter than a rock”—also an adequate description of humans.
No, it is not. An AI does not have to be all-powerful. It might get beaten by a coordinated effort of humanity, or by another AI, or by an unlikely accident.
It’s theoretically possible to have an isolated AI with no ability to affect the physical world, but that’s rather boring. Hollywood AI has enough power to produce dramatic tension, but is still weak enough to be defeated by plucky humans. In reality, once a UFAI gets to post-Judgment Day Skynet-level power, it’s over. Even putting aside the fact that it has a freaking time machine, the idea of fighting a UFAI that has wiped out most of humanity in a nuclear holocaust and controls self-replicating, nigh-indestructible robots is absurd.