killing puppies doesn’t cure cancer. You can kill one hundred puppies and still not save your kid.
I get you’re trying to show how committing an obviously evil act won’t magically fix your unrelated problems, but I think you’re pushing too hard on the “evil act” part of things and not enough on representing the reasoning of people who think killing Sam Altman would help somehow. Like, whoever threw that molotov cocktail probably wouldn’t feel your example captured how they’re thinking about this. But they and others who reason like them are the ones who need to internalize your point!
Now, I don’t know exactly what went on inside that guy’s head. But I think it might be something like this. “Sam Altman has some causal influence on AI development. He’s part of what’s causing the race! So if we get rid of him, we gain time.” This is obviously an impoverished mental model, and it’s operating more on associations or vibes than causal mechanisms.
So a better example would replace puppies with something associated with increasing cancer. Perhaps “cigarette smokers” or “nuclear power plants”. “If I kill all the cigarette smokers then my daughter’s cancer won’t recur”. Or perhaps you have someone on a noble crusade to end cancer, and they decide to bomb all the nuclear power plants. Then the analogy to “killing Sam Altman will reduce AI x-risk” would be tighter.
EDIT: Also, thanks for writing the post I wanted to write.
I agree with the need to accurately model the thinking of anti-extinction madmen in order to better communicate with and de-escalate them. I think the reasoning might be: “Sam Altman is one of the actors driving the race towards dangerous AI capabilities. The current environment seems to incentivize this behaviour. If I commit a visible violent act towards him, it will reduce the dangerous incentive. After all, people want money and prestige, but they don’t want their property vandalized or to die violently.”
They may also have been thinking of this as a commitment signal. Throwing fire at someone’s house is a very bad thing to do, both in terms of the effect it could have on the victim and the effect it will likely have on the perpetrator. To know that and still be willing to do it could be seen as a signal of conviction in the belief that Sam Altman’s actions, and the actions of large AI companies, are harmful. Unfortunately, it can also be seen as a signal that the perpetrator is violently insane, and a signal that the anti-extinctionists are violently insane. Ironic and unfortunate.
Also, to the end of de-escalating madmen, I think we need more compressed versions of the essence of this post. Maybe something like “global GPU control is the only sufficient control against ASI; anything that doesn’t move us towards international coordination is counterproductive”.
According to the criminal complaint, he explicitly said so:

“Also if I am going to advocate for others to kill and commit crimes, then I must lead by example and show that I am fully sincere in my message.”
This is someone whose open interest in violence was explicitly rejected by at least two different activist groups (Stop AI and Pause AI) from what I’ve heard.