“If you really believed X, you would do violence about it. Therefore, not X.”
I see this more often in areas other than AI risk, but regardless of context I agree it is a weak argument. It requires multiple other premises to also be true. Let’s assume that people are not going to be moved by pacifism or by “violence is always wrong” and skip straight to the practical premises:
- The violence one is capable of would cause a meaningful change in the direction of X, or in the time until its onset.
- The targets of violence one has access to are responsible for X, or able to influence it.
- There will be no backlash to the violence that increases the likelihood of X.
And these are just the basic ones that would rule out most reasons for violence.
If someone thought they could get away with eliminating a couple of CEOs and data centers, would their belief in extinction risk justify doing it? Do they think it would change timelines? Because I don’t think it would. I give it below 1% that the loss of all the C-level executives at all the frontier labs stops AI development. It jumbles up who gets there first, and it changes who is personally taking which actions, but as far as I can tell it doesn’t have a high chance of saving anyone. So premise one fails. I am also quite confident that this would make AI safety advocates lose ground and be portrayed badly in the media (traditional and social alike), so I’d put it at more than 50% likely to work against the goal of reducing AI risk, not for it. So premise three fails as well.
Because of these unstated premises, there are very few people for whom violence is a realistic option.
The Gen Z riots globally have, by some metrics, been doing pretty well. For example, the Wikipedia page claims 3 failures, 7 successes, and 14 ongoing, with 4 of those at least partly successful. That tally includes multiple governments overthrown and several corrupt regimes forced to back down from self-enrichment measures. However, the new regimes appear to be just as corrupt, only favoring a different group. If anti-corruption was the goal, then premise one fails and the violence isn’t helping. If the goal is “the other team loses,” then perhaps the Wikipedia claim holds up.
AI safety seems to be in a similar scenario, where the most likely outcome of violence is at best changing who gets there first, not whether we get there at all. I don’t see that as justification for violence, and to me the original argument is not valid.