The book is right that the real answer is to try to pause AI development. That it could theoretically be possible to build safe ASI with future technology is not the killer objection you think it is. I also think it’s possible we could end up okay by accident. It’s still a foolish plan, and I assume you’re only doing Superalignment/AI Control stuff bc you feel forced to, and that you wouldn’t choose for your safety work to have to play catch-up to a racing industry.
Yeah, I suspect that these one-shot big protests are drawing on a history of organizing in those or preceding fields. The Women’s March coalition comes together for one big event but draws on a far deeper history of small demonstrations and deliberate organizing to make it to that point; that’s my point. Idk about Free Internet, but I would bet it leaned on Free Speech organizing and advocacy.
I sure wish someone would put on a large AI Safety protest if they know a way to do this in one leap. If I got a sponsor for a concert or some other draw, then perhaps I could see a larger thing happening quickly in the family of AI Safety protests, but I’d like to keep the brand pretty earnest and message-focused.
I have to note that, based on our history, I interpret your posts as attacking; the subtext is that I’m just not a good organizer and that, if you wanted to, you could organize a way bigger movement way faster. If that’s true, I wish you would! I’m trying my best with my understanding of how this can work for me, and I wish more people like you were embracing broad messaging like protests.
?
I’m saying he’s projecting his biases onto others. He clearly does think PauseAI rhymes with the Unabomber somehow, even if he personally knows better. The weird pro-tech vs. anti-tech dichotomy, and especially thinking that others are blanketly anti-tech, is very rationalist.
Do you think those causes never had organizing before the big protest?
Yeah I unintentionally baited the “not always” rationalist reflex by talking normally
I think the relevant question is how often social movements begin with huge protests, and that’s exceedingly rare. It’s effective to create the impression that the people just rose up, but there’s basically always organizing groundwork for that to take off.
Do you guys seriously think that big protests just materialize?
Yeah, the SF protests have been roughly constant in attendance (25-40), but we have more locations now and have put a lot more infrastructure in place.
The thing is, there isn’t a great dataset. Even with historical case studies where the primary results have been achieved, there are a million uncontrolled variables, and we don’t, and never will, have experimentally established causation. But, yes, I’m confident in my model of social change.
What leapt out at me about your model was that it was very focused on how an observer of the protests would react with a rationalist worldview. You didn’t seem to have given much thought to the breadth of social movements and how a diverse public would have experienced them. Like, most people aren’t gonna think PauseAI is anti-tech in general and therefore similar to the Unabomber. Rationalists think that way, and few others do.
Sounds like you are saying that you have those associations, and I still see no evidence to justify your level of concern.
Small protests are the only way to get to big protests, and I don’t think there’s a significant risk of backfire or cringe reaction making trying worse than not trying. It’s the backfire supposition that is baseless.
Appreciate your conclusion tho— that reaching the public is our best shot. Fortunately, different approaches are generally multiplicative and complementary.
People usually say this when they personally don’t want to be associated with small protests.
As-is, this is mostly going to make people’s first exposure to AI X-risk be “those crazy fringe protestors”. See my initial summary regarding effective persuasion: that would be lethal, gravely sabotaging our subsequent persuasion efforts.
Pretty strong conclusion with no evidence.
Yeah, this is the first time I’ve commented on LessWrong in months, and I would prefer to just be out of here. But the OP was such nasty mean-girl bullying that, when someone showed it to me, I wanted to push back.
Come on, William. “But they said their criticism of this person’s reputation wasn’t personal” is not good enough. It’s like calling “no take-backs” or something.
I have a history in animal activism (both EA and mainstream) and I think PETA has been massively positive by pushing the Overton window. People think PETA isn’t working bc they feel angry at PETA when they feel judged or accused, but they update on how it’s okay to treat animals, and that’s the point. More moderate groups like the Humane Society get the credit, but it takes an ecosystem. You don’t have to be popular and well-liked to push the Overton window. You also don’t have to be a group that people want to identify with.
But I don’t think PETA’s an accurate comparison for Kat. It seems like you’re comparing Kat and PETA bc you would be embarrassed to be implicated by both, not bc they have the same tactics or extremity of message. And then the claim that other people will be turned off or misinformed becomes a virtuous pretext to get them and their ideas away from your social group and identity. But you haven’t open-mindedly tried to discover what’s good for the cause. You’re just using your kneejerk reaction to justify imposing your preferences.
There’s a missing mood here—you’re not interested in learning if Kat’s strategy is effective at AI Safety. You’re just asserting that what you like would be the best for saving everyone’s lives too and don’t really seem concerned about getting the right answer to the larger question.
Again, I have contempt for treating moral issues like a matter of ingroup coolness. This is the banality of evil as far as I’m concerned. It’s natural for humans, but you can do better. The LessWrong community is supposed to help people not do this, but its members aren’t honest with themselves about what they get out of AI Safety, which is something very similar to what you’ve expressed in this post (a gatekept community, feeling smart, a techno-utopian aesthetic), instead of trying to discover in an open-minded way what’s actually the right approach to help the world.
Yeah, actually the employees of Lightcone have led the charge in trying to tear down Kat. It’s you who has the better standards, Maxwell, not this site.
I’m getting a strong current from this post of “being smart and having interesting and current tastes is more important than trying to combat AI Danger, and I want all my online spaces to reflect this.” You even seem upset that Kat is contaminating subreddits that used to not be about Safety with Safety content… like you’re mad about progress in the embrace of AI Safety. You critique her for making millennial memes as if millennials don’t exist anymore (LessWrong is millennial and older) and as if content should only be for you.
You seem kinda self-aware of this at one point, but doesn’t that seem really petty and selfish of you?
I appreciate how upfront you are here, bc a lot of people who feel the same way disguise it behind moralistic or technical arguments. And your clarity should make it easier for you to get over yourself and come to your senses.
Silly me, I’ll return to never interacting with you and this website. Maybe Tomas B can publish somewhere better in the future so I can continue seeing his good work.