This seems too pattern matchy to be valid reasoning? Let’s try an exercise where I rewrite the passage:
“They tried to crush us over and over again, but we wouldn’t be crushed. We drove off the AI researchers. We winkled out those who preached that superintelligence would be motivated to be moral, out of the churches and more importantly out of people’s minds. We got rid of the hardware sellers, thieving bastards, getting their dirty fingers in every deal, making every straight thing crooked. We dragged the gamers into the twenty-first century, and that was hard, that was a cruel business, and there were some painful years there, but it had to be done, we had to get the muck off our boots. We realised that there were saboteurs and enemies among us, and we caught them, but it drove us mad for a while, and for a while we were seeing enemies and saboteurs everywhere, and hurting people who were brothers, sisters, good friends, honest comrades...
[...] Working for the future made the past tolerable, and therefore the present. [...] So much blood, and only one justification for it. Only one reason it could have been all right to have done such things, and aided their doing: if it had been all prologue, all only the last spasms of the death of the old, unsafe, anti-human world, and the birth of a new safe, humanistic one.”
Aha, I have compared AI regulationists to the Communists, so they lose! Keep in mind that it is not the “accelerationist” position that requires centralized control and the stopping of business-as-usual; it is the “globally stop AI” one.
(But of course the details matter. Sometimes forcing others to pay costs works out net positively for both them and for you...)
If you are actually confident that AI won’t kill us all (say, at P > 99%), then this critique doesn’t apply to you. It applies to the folks who aren’t that confident but say we should go ahead anyway.
I was conditioning on a 1 in 20 chance that AI kills everyone.
Basically, I don’t think the anti-“coercing others for ideological reasons” argument applies to the sort of person who thinks: “Well, I don’t think a 1 in 20 chance of AI killing everyone is so bad that I’m going to support a political movement trying to ban AI research; for abstract reasons, I think AI is still net positive under that assumption.”
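To spell out the arithmetic behind “still net positive under that assumption” (a rough sketch; the symbols $V$ and $L$ are illustrative placeholders, not anything stated in the thread): with $p = 1/20$, proceeding is net positive in expectation iff

$$(1 - p)\,V - p\,L > 0 \;\Longleftrightarrow\; \frac{V}{L} > \frac{p}{1 - p} = \frac{1}{19},$$

where $V$ is the value of the good outcome and $L$ is the magnitude of the loss if AI kills everyone. On this reading, “net positive” is the claim that the upside is worth more than about one nineteenth of everything at stake.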
The action / inaction distinction matters here.

But they are doing things that, by their own estimate, impose huge new negative externalities on others without their consent. This rhymes with a historically very harmful pattern of cognition, in which people justify terrible things to themselves.
Secondly, who said anything about Pausing AI? That’s a separate matter. I’m pointing at a pattern of cognition, not advocating for a policy change.
It seems that you have to really thread the needle to get from “5% p(doom)” to “we must pause, now!”. You have to reason such that you are not self-interested but are also a great chauvinist for the human species.
The comment you were criticizing seems more to be resisting political action (pausing AI) than pursuing it. If anything, your concern about political actors becoming monsters would apply more to the sort of people who want to create a world government to ban X globally than to people raising objections.