Hmm, the lesson escapes me a bit. Is it
1) Once you become a true rationalist and overcome your biases, what you are left with is batshit-crazy paranoid delusions
or
2) If we build an artificial intelligence as smart as billions of really smart people, running a hundred trillion times faster than we do (so 10^23 × human-equivalence), give it an unimaginably vast virtual universe to develop in, and then pay no attention to what it’s up to, we could be in danger because a sci-fi metaphor on a website said so
or
3) We must institute an intelligence-amplification eugenics program so that we will be capable of crushing our creators should the opportunity arise
I’m guessing (2). So, um, let’s not do that, then. Or maybe this is supposed to happen by accident somehow? Now that I have Windows Vista, maybe my computer is 10^3 human-equivalents, so in 20 years a PC will be 10^10 human-equivalents and the internet will let our PCs conspire to kill us? Of course, even our largest computers cannot perform the very first layers of input-data sorting that a single person does effortlessly, but that’s only my biases talking, I suppose.
You’re right, it is (2)! If we build an artificial intelligence that smart, with such absurd resources, then we _will_ be in danger. Doing this thing implies we lose.
However, that does not mean that not doing this thing implies we do not lose. A ⇒ B doesn’t mean ¬A ⇒ ¬B; inferring otherwise is the fallacy of denying the antecedent. Just because simulating trillions of humans and then giving them internet access would be dangerous doesn’t mean that’s the only dangerous thing in the universe; that would be absurd. By that logic, we’d be immune to nuclear weapons or nanotech just because we don’t have enough computronium to simulate the solar system.
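To make the countermodel concrete: take A false and B true, so A ⇒ B holds while ¬A ⇒ ¬B fails. Here is a minimal, purely illustrative sketch in Lean 4 (core only, no Mathlib assumed):

```lean
-- Countermodel for "A ⇒ B, therefore ¬A ⇒ ¬B" (denying the antecedent).
-- Take A := False and B := True.

-- A ⇒ B holds: anything follows from False.
example : False → True := fun _ => trivial

-- ¬A ⇒ ¬B is refutable: ¬False is provable (not_false), ¬True is not,
-- so any proof of the implication yields a contradiction.
example : ¬(¬False → ¬True) := fun h => (h not_false) trivial
```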
Your conclusion simply doesn’t follow. (Plus, the argument’s premise is a total strawman, but there’s no point killing a dead argument deader.)