If Daniel Alejandro Moreno-Gama has a LessWrong account, I cannot find it, using my available tools as an admin and all the publicly reported usernames I've seen.
Arson is very bad. If he did what the news articles say he did, he is a villain. If you buy the premise that AI is on track to kill everyone (which I mostly do), the correct conclusion is that we need a political and regulatory solution. AI-risk-motivated violence is bad for all the usual, extremely important reasons, and is additionally bad because it undermines that.
I have seen screenshots showing him as a participant on the PauseAI Discord, under the username “Butlerian Jihadist”. Specifically, a screenshot of a moderator warning him that advocating violence is grounds for a ban there. It would also be grounds for a ban on LW. And, to be clear, that’s because violence is actually bad; it’s not just about talk, and no one I know changes their stance when the conversations are more discreet.
No, this isn’t true, and I am ultimately head-moderator. I think many people will encounter thoughts and ideas around whether violence is appropriate when they encounter the existential stakes of AI. Discussing whether those ideas are right or wrong is very much a thing I want LessWrong to be able to do.
I think they are almost universally wrong, but people are more likely to arrive at that conclusion through argument (and who knows, we do not live in a world where we can truly always rely on never needing to take up arms in some form or another, and there are certainly edge cases here worthy of deliberation). I would much rather someone who is thinking of violence come here and be met with genuine, real arguments than be driven into the shadows, feeling like people are censoring any discussion of this, left with no choice but to make up their own mind, all alone and without any help, on this extremely difficult and high-stakes decision.
Discussing or advocating violence is not banned on LessWrong (though I would be surprised if it weren't met with very consistent opposition in practically all cases). This also doesn't mean that all discussion of violence is permitted. If you are being a dick, or are causing discussions to go off the rails, all the usual moderation rules apply, on all sides of any discussions here.
What is the point of this? Say you find this criminal’s LessWrong profile—what is the benefit of exposing it? On the flipside, there are downsides, namely violating norms of privacy.
This is just elevating your aesthetic preference for what the violence you’re advocating for looks like to a moral principle. The claim that throwing a Molotov cocktail at one guy’s house is counterproductive to the goal of “bombing the datacenters” is a better argument, though one I do not believe.
Of course, it still makes sense for you to enforce these policies. Because you fear the violence the state might bring down on you if you don’t.
I could spell out the relevant differences here, but I don’t believe you’re genuinely confused about this. Instead, you got the idea that drawing a false equivalence between regulation and throwing a molotov cocktail was a rhetorical weapon you could use. Maybe you tried it out in some echo chambers, and got positive feedback from some people who also pretended to be confused in this way.
I’m not pretending to be confused. I’m calling out the hypocrisy in your sanctimonious denunciation of some minor, ineffectual violence while you simultaneously and publicly advocate for far worse, just gussied up. But no, I did not expect any response from you other than the typical reaction to any such special pleading being pointed out: “false equivalence,” “whataboutism,” “tu quoque fallacy,” etc.
Government regulations come into being through political processes which at least somewhat track truth and the collective interests of voters. If the arguments that superintelligence is not worth the risk are compelling enough, then governments will ban building it; if they aren’t, they won’t. It’s far from perfect in the United States, but it sure as heck beats having individual outlier people attempting to implement their preferred decision with violence.
Government regulations come with enforcement mechanisms, which, somewhere along the escalation chain, wind up including imprisonment. Those regulations have violence lurking in the background behind them, but most of the time, in practice, lurking in the background is as far as it goes. Lawyers warn businesses away from doing things that are banned, and then no one goes to jail. It’s far from perfect, but the US legal system has had a lot of effort invested into making it predictable and proportionate.
“Political processes which at least somewhat track truth and the collective interests of voters” applies to Molotov cocktails as well… there’s a reason one common definition of “the government” is “the successful claim of a monopoly on violence”.
Yes, yes, governments are more sophisticated than stochastic social media terrorists. They have processes and checks and whatnot. This means their violence is more likely to actually be in their self-interest, and not out of emotional spite or delusional grandeur. So? Read Shankar’s original comment:
This is just elevating your aesthetic preference for what the violence you’re advocating for looks like to a moral principle. The claim that throwing a Molotov cocktail at one guy’s house is counterproductive to the goal of “bombing the datacenters” is a better argument, though one I do not believe.
He correctly identified that you are really saying, “individuals are prone to take counterproductive violent actions,” and thus the correct refutation is to say that instead of pretending your heuristic is a moral tautology. It clearly isn’t a tautology, or someone couldn’t have thrown a cocktail!