I agree it’s getting used publicly. And, to be clear, I don’t have that strong an opinion on this; I’m not defending the phrase super hard. But you haven’t actually justified that a bad thing is definitely happening, from my perspective.
Some people on the internet think a thing sounds dumb, sure. The thing is that pushing an Overton window basically always has people laughing at you and thinking you’re dumb, regardless. People say AI concerns are a weird silly outlandish doomer cult no matter how everything is phrased.
The goal here (on the part of the people saying the phrase) is not “build the biggest tent”, nor is it “minimize sounding dumb”. It’s “speak plainly and actually convey a particular really bad thing that is likely to happen. Ensure that enough of the right people notice that an actual really bad thing is likely to happen, rather than glossing over and minimizing it.”
Your post presumes “we’re trying to build a big tent movement, and it should include things other than AI killing everyone.” But in fact we spent several years in which most of the public messaging was big-tent-ish, and it seemed like this did not actually succeed strategically.
Put another way – I agree that maybe it’s correct not to sound dumb here. But I absolutely think you need to be willing to sound dumb, if that turns out to be the correct strategy. When I see posts like this, I think they are often driven by a generator that is not actually optimizing for winning at a strategic goal, but for avoiding social stigma (which is a very scary thing).
(I think there are counter-problems within the LW sphere of being too willing to be contrarian and edgy. But you currently haven’t done any work to justify that the problem here is being too edgy rather than not edgy enough.)
(Meanwhile, I super endorse trying to come up with non-dumb-sounding things that actually achieve the various goals. But note that the people saying “AI notkilleveryonism” are specifically NOT optimizing for “build the biggest tent”.)
People say AI concerns are a weird silly outlandish doomer cult no matter how everything is phrased.
No, you’re dead wrong here. Polls show widespread popular concern about AI developments. You should not give up on “not seeming like a weird silly outlandish doomer cult”. If you want to actually get things done, you cannot give up on that.
Hmm. So I do agree that the recent polls showing support for “generally worried” and “the Pause open letter” are an important strategic consideration here. I do think it’s fairly reasonable to argue “look man, you actually have the public support, please don’t fuck it up.”
So, thank you for bringing that up.
It still feels like it’s not actually a counterargument to the particular point I was making – I do think there are (many) people who respond to taking AI extinction risk seriously with ridicule, no matter how carefully it’s phrased. So if you’re just running the check of “did anyone respond negatively to this?”, the check will basically always return “yes”, and it takes a more careful look at the situation to figure out what kind of communications strategy actually works.
I think we’re on the same page here. Sorry if I was overly aggressive there, I just have strong opinions on that particular subtopic.