I really like this.
I think AI concern can fail in two ways:
1. We lose the argument. We state our positions confidently, with no shyness or sugarcoating. It grabs a lot of attention, but it’s negative attention and ridicule. We get a ton of engagement, but from people who consider us a low-status morbid curiosity. We make a lot of powerful enemies, who relentlessly attack AI concern and convince almost everyone that we’re wrong.
2. We don’t lose any argument, but the argument never happens. We have a ton of impressive endorsements, and anyone who actually reads about the drama learns that our side consists of high-status scientists and geniuses. We have no enemies: the only people rarer than those arguing for AI risk are those arguing against it. And yet… we are ignored. Politicians are simply too busy to think about this. They may think, “I guess your logic is correct… but no other politicians seem to be invested in this, and I don’t really want to be the first one.”
Being bolder increases the “losing the argument” risk but decreases the “argument never happens” risk. And this is exactly what we want at this point in time. (As long as you don’t do alienating things like traffic-obstruction protests.)
PS: I also think there are two kinds of burden of proof:
1. Rational burden of proof. The debater who argues we are 100% safe has the burden of proof, while the debater arguing that “building a more intelligent species doesn’t seem very safe” has no burden of proof.
2. Psychological burden of proof. The debater arguing the position everyone seems to agree with has no burden of proof, while the debater arguing the radical, extreme position has the burden of proof.
How the heck do we decide which position is the “radical extreme position”? It depends on many things, e.g. how many experts endorse AI concern, and how many experts (e.g. Yann LeCun) reject it. But the balance already seems to favour AI concern, yet it’s still AI concern that suffers from the psychological burden of proof.
So maybe the problem is not expert endorsements, but ordinary laymen’s beliefs? Well, 55% of Americans surveyed agree that “mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” Only 12% disagree.
So maybe it really is vibes?! You just have to emphasize that this is a strongly supported position, that this is what experts think, and that if you think “this is insanity,” you’re out of the loop. You’ve got to read up on it, because this great paradigm shift quietly happened while you were paying attention to other things.
Given that the psychological burden of proof might work this way, even risk (1), “we lose the argument,” could actually be reduced if we are more confident.