I think we’re just trying to do different things here… I’m trying to describe empirical clusters of people / orgs, you’re trying to describe positions, maybe? And I’m taking your descriptions as pointers to clusters of people, of the form “the cluster of people who say XYZ”. I think my interpretation is appropriate here because there is so much importance-weighted abject insincerity in publicly stated positions regarding AGI X-risk that it just doesn’t make much sense to focus on the stated positions as positions.
Like, the actual people at The Curve or whatever are less “I will do alignment, and will be against racing, and alas, this may provide some cover” and more “I will do fake alignment with no sense that I should be able to present any plausible connection between my work and making safe AGI, and I will directly support racing”. All the people who actually do the stated thing are generally understood to be irrelevant weirdos. The people who say that are being insincere, and in fact support racing.
I was trying to map out disagreements between people who are already seriously concerned about AI risk.
Agreed that this represents only a fraction of the people who talk about AI risk, and that there are a lot of people who will use some of these arguments as false justifications for their support of racing.
EDIT: as TsviBT pointed out in his comment, the OP is actually about people who self-identify as members of the AI Safety community. Given that, I think the two splits I mentioned above are still useful models, since most people I meet who self-identify as members of the community seem to be sincere, without stated positions that differ from their actual reasons for doing what they do. I have met people who I believe to be insincere, but I don’t think they self-identify as part of the AI Safety community. I think TsviBT’s general point about insincerity in the AI Safety discourse is valid.
Um, no, you responded to the OP with what sure seems like a proposed alternative split. The OP’s split is about “people who self-identify as members of the AI safety community”.
I think you are making an actual mistake, due to a significant gap in your thinking rather than just a random slip, one with bad consequences, and I’m trying to draw your attention to it.
You make a valid point. Here’s another framing that makes the tradeoff explicit:
Group A) “Alignment research is worth doing even though it might provide cover for racing”
Group B) “The cover problem is too severe. We should focus on race-stopping work instead”