Many people practice and endorse ethical heuristics against censuring speech on any topic, especially salient and politically relevant topics, lest such censure corrupt our love of truth, our ability to locate good policy options through the free and full exchange of ideas, or our freedom, autonomy, and self-respect more broadly.
I don’t think this is actually true.
Even among rationalists, I believe there are red lines: ideas that cannot be raised without censure and disgust. I won’t attempt to draw them. The fact that rationalists draw these lines in different places than many others would, including on the topic of racial difference, is taken not as evidence of a commitment to open-mindedness that overrides other ethical commitments, but simply as a lack of commitment to those specific principles, with open-mindedness serving as thin cover. Tetlock’s ideas about sacred values, which cannot be easily traded off, may be useful here. It’s not that those willing to discuss racial differences lack sacred values; it’s just that non-racism isn’t one of them.
Regarding the clash between the prudence heuristic “don’t do something that has a 10% chance of killing all people” and other heuristics such as “don’t impede progress,” we have to consider the credibility problem in experts’ assertions of risk, when many of those same experts continue to work on A(G)I (and are making fortunes doing so). Their statements about the risk say one thing, but their actions say another, so we cannot conclude that anyone is genuinely trading off against the prudence heuristic. This relates to my previous comment: “don’t kill all humans” looks like a sacred value, so statements implying that one is making the trade-off are not credible. From this “revealed belief” perspective, the statement “I believe there is a 10% chance that AI will kill all people,” made by an AI expert still working toward AGI, is a false statement, and the only way to make the prediction credible is for AI experts to stop working on AI (at which point stopping the suicidal hold-outs becomes much easier). Conversely, amplifying the risk predictions of leaders in the AI industry is a great way to confound the advocacy of the conscientious objectors.