Rather he is presenting an argument that one should have a very strong prior against the ideas presented in Superintelligence, which is to say they require a truly large amount of evidence to believe them to such an extent as to uproot yourself and alter your life’s purpose, as many are doing.
Okay, suppose one should start off with a small prior probability on AI risk. What matters is the strength of the update; do we actually have a truly large amount of evidence in favor of risk?
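To make "strength of the update" concrete: in the odds form of Bayes' rule, posterior odds are prior odds times the likelihood ratio of the evidence, so even a small prior is overwhelmed by strong enough evidence. A minimal sketch (the numbers are purely illustrative, not anyone's actual estimates):

```python
def posterior(prior: float, likelihood_ratio: float) -> float:
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    # Convert odds back to a probability.
    return post_odds / (1 + post_odds)

# A 1% prior combined with 100:1 evidence lands at roughly even odds.
p = posterior(0.01, 100.0)
```

The point is simply that "the prior is small" and "the posterior is small" are different claims; the second requires also arguing that the evidence is weak.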
I propose the answer is obvious: Yes.
Okay, maybe you’re just tuning in, and haven’t read all of Superintelligence or all of Less Wrong. Maybe you’re still living in 2013, when it wasn’t yet obvious that all the important people think AI alignment is a real issue worth putting serious effort into. Maybe you can’t evaluate arguments on their merits, and so all you have to go on is their surface features.
Then you probably shouldn’t have an opinion, one way or the other. It turns out that the ability to evaluate arguments is critically important for reaching correct conclusions.
But suppose you still want to. Okay, fine: this article is a collection of arguments that don’t consider counterarguments, or even pretend to. One of Yudkowsky’s recent Facebook posts seems relevant: basically, any critique whose author doesn’t expect to lose points for failing to respond well to counter-critique is probably a bad critique.
Does this talk look like the talk the speaker would give, if Bostrom were in the audience, and had an hour to prepare a response, and then could give that response?
Compare to Superintelligence, Less Wrong, and the general conversation about AI alignment, where the ‘alarmists’ (what nice, neutral phrasing from idlewords!) put tremendous effort into explaining what they’re worried about, and why counterarguments fail.
His actual professed opinion on AI risk, given at the end, is rather agnostic, and that seems to be what he is arguing for: a healthy dose of agnostic skepticism.
Notice that “agnostic,” while it might sound like an easier position to justify than the others, really isn’t. See Pretending to be Wise, and the observation that ‘neutrality’ is a position as firm as any other when it comes to policy outcomes.
Suppose that you actually didn’t know, one way or the other. You know about a risk, and maybe it’s legitimate, maybe it’s not.
Note that the nitrogen ignition example at the start is presented as a “legitimate” risk, but legitimacy here is a statement about human ignorance: there was a time when we didn’t know certain facts about the math, and now we do. (That calculation involved no new experiments, just generating predictions that hadn’t been generated before.)
So you’re curious. Maybe the arguments in Superintelligence go through; maybe they don’t. Then you might take the issue a little more seriously than ‘agnosticism’, in much the same way that one doesn’t describe oneself as “agnostic” about where the bullet is during a game of Russian Roulette. If you thought the actual future were at stake, you might use styles of argumentation designed to actually reach the truth, so that you could proceed or halt accordingly. The Los Alamos physicists didn’t just mock the idea of burning up the atmosphere; they ran the numbers because all life was at stake.
But what is it instead? It says right at the beginning:
The computer that takes over the world is a staple scifi trope. But enough people take this scenario seriously that we have to take them seriously.
Or, to state it equivalently:
Science fiction has literally never predicted any change, and so if a predicted change looks like science fiction, it physically cannot happen. Other people cannot generate correct, non-obvious arguments, only serve as obstacles to people sharing my opinions.
Perhaps the second version looks less convincing than the first. If so, I think that’s because you’re not able to spin or de-spin things effectively enough: the first sentence was classic Bulverism (attacking the suspected generator of a thought instead of the thought’s actual content), and replacing it with the actual content makes it ludicrous. The second replaces an implicit dismissal of the arguments’ veracity with an explicit one (generalized to all arguments; if they were going to single out what makes this particular argument not worth taking seriously, they would go after it on the merits).
The idea that the kind of AI this community is worried about is not the scenario that is common in scifi. A real AGI wouldn’t act like the ones in scifi.
I get where you’re going with this, but I think it’s either not true or not relevant. That is, it looks like a statement about the statistical properties of scifi (most AI in fiction is unrealistic), which might be false if you condition appropriately (there have been a bunch of accurate presentations of AI recently, so it’s not clear this still holds for contemporary scifi). What I care about, though, is whether that matters.
Suppose the line of argument is something like “scifi is often unrealistic,” “predicting based on unrealistic premises is bad,” and “this is like scifi because it’s unrealistic.” This is a weaker argument than one that keeps just the second piece, with the third modified to say “this is unrealistic.” (And for that version to work, we need to focus on the details of the argument.)
Suppose instead the line of argument is something like “scifi is often unrealistic,” “predicting based on unrealistic premises is bad,” and “this is like scifi because of its subject matter.” Obviously this leaves a hole—the subject matter may be something that many people get wrong, but does this presentation get it wrong?