Very glad of this post. Thanks for broaching, Buck.
Status: I’m an old nerd, lately ML R&D, who dropped career and changed wheelhouse to volunteer at Pause AI.
Two comments on the OP:
details of the current situation are much more interesting to me. In contrast, radicals don’t really care about e.g. the different ways that corporate politics affects AI safety interventions at different AI companies.
As per Joseph’s response: this does not match me or my general experience of AI safety activism.
Concretely, a recent campaign was specifically about DeepMind breaking particular voluntary testing commitments, with consideration of how staff would feel.
Radicals often seem to think of AI companies as faceless bogeymen thoughtlessly lumbering towards the destruction of the world.
I just cannot see them that way myself.
(There is some amount of it around, but also it is not without value. See later.)
Gideon F:
This strikes me as a fairly strong strawman. My guess is the vast majority of thoughtful radicals basically have a similar view to you.
Reporting from inside: I rate it a good guess, especially when you weight by “thoughtful”.
Lukas:
For illustration, imagine I donate to Pause AI (or join one of their protests with one of the more uncontroversial protest signs), but I still care a lot about what the informed people who are convinced of Anthropic’s strategy have to say. Imagine I don’t think they’re obviously unreasonable, I try to pass their Ideological Turing test, I care about whether they consider me well-informed, etc.
Anthony feels seen / imagined.
If those conditions are met, then I might still retain some of the benefits you list.
Some, for sure. The important one I noticed myself struggling to get: engaged two-way conversation with frontier lab folk. A trade-off.
Back to faceless companies: some activists, including thoughtful ones, are more angry than me. (Anthropic tend to be a litmus test. Which is fun given their pH variance week to week.)
Exasperated steelman: these lab folk are externalizing the costs of their own risk models and risk tolerances without anyone’s consent. That doesn’t seem very epistemically humble. But I get that the virtue math is fragile, so I feel sympathy and empathy for many parties here.
Still, for both the emotional health of activists and the odds of public impact, radicals helping each other feel some aggravated anger does seem sane. In this regard, as in others, I find there are worthwhile things to learn and evaluate from the experience of campaigners who were never in EA or on LessWrong.
I’ll risk another quote without huge development—williawa:
“The right amount of politics is not zero, even though it really is the mind killer.” But I also think arguments for taking AI x-risk very seriously are unusually strong compared with most political debates.
For me: the first sentence well-phrased, the second insightful.
Lastly, Kaleb:
In the leftist political sphere, this distinction is captured by the names “reformers” vs “revolutionaries”, and the argument about which approach to take has been going on forever.
and Lukas again:
whether radical change goes through mass advocacy and virality vs convincing specific highly-informed groups and experts, seems like somewhat of an open question and might depend on the specifics.
My response to both of these is pretty much “¿por qué no los dos?” (why not both?). This is not zero-sum. Let us apply disjunctive effort.
It is even the case that a “pincer movement” helps: a radical flank primes an audience for moderate persuasion. (This isn’t my driver: of course I express my real position. But it makes me less worried about harm if I’m on the wrong side.)