How many people on LessWrong realize that when you tell someone their AI project is dangerously stupid, or that their favorite charity is a waste of money, you risk losing them forever, and not because of anything to do with the subtler human biases, but just because most people hate being told they’re wrong?
Well, the problem is, these two specific examples simply are not true. Many charities are reasonably effective at their stated purpose, even if “effective altruism” believers would hold that they are strictly suboptimal in terms of human good. Likewise, let’s be frank: almost all AGI projects are complete fucking bunk and never go anywhere. The ones that stand even a slight chance of working tend to be run by people intelligent enough to notice the dangers when you point them out (even if they don’t believe you can do anything about those dangers).
If you want to win an argument, be prepared to be persuaded of things yourself, to take into account evidence you haven’t heard yet—including evidence for null hypotheses, which are, after all, usually true.