I really appreciate your clear-headedness in recognizing these phenomena even in people “on the same team”, i.e. people very concerned about and interested in preventing AI X-Risk.
However, I suspect that you also underrate the amount of self-deception going on here. It’s much easier to convince others if you convince yourself first. I think people in the AI Safety community self-deceive in various ways, for example by choosing to not fully think through how their beliefs are justified (e.g. not acknowledging the extent to which they are based on deference—Tsvi writes about this in his recent post rather well).
There are of course people who explicitly and consciously plan to deceive, thinking things like “it’s very important to convince people that AI Safety/policy X is important, and so we should use the most effective messaging techniques possible, even if they rely on false or misleading claims.” However, I think there’s a larger set of people who, as they realize claims A, B, and C are useful for consequentialist reasons, internally start questioning those claims less and become biased toward believing them themselves.
Sure! I definitely agree that’s going on a lot as well. But I think that kind of deception is more common in the rest of the world, and the thing that sets this community apart from others is the ability to do something more intentional here (which, combined with plenty of self-deception, can result in quite catastrophic outcomes, as FTX illustrates).