I think if your main exposure to PauseAI is a certain Twitter account, as served to you by the algorithm via interactions with your AI safety friends, then you might think that they’re mostly going after other, more moderate safety advocates. But this just isn’t a good picture of the overall actions of the movement. At least in the case of PauseAI UK, whose inner workings I understand reasonably well, essentially zero time is spent thinking about other AI safety advocates. I expect the same is true of Yudkowsky and MIRI.
Of course it is the case that being rude on Twitter towards people working on safety teams at OpenAI makes some things worse on some axes. And this is mostly bad and pointless, and I don’t endorse it. But that’s not even really what that post from Rob was doing! Rob was writing an opinionated, but civil, criticism. In what way is this “knifing” the other AI safety advocates? It’s not like MIRI killed SB 1047.
Now if Scott means something like “Giving money to MIRI pushes the world in the MIRI-preferred direction, and this would have meant no Anthropic and no safety team at OpenAI”, then I can kind of maybe see what he means here. But this just isn’t “knifing” in the sense of betrayal that most people mean by the word. It’s just opposing someone’s plan, in a way that they’ve been doing openly for years. It’s not like MIRI would have actually used marginal resources to stop Anthropic from being created by, like, sabotage or something.
MIRI don’t even say that working in safety is bad! They only say that they think their approach is better. IABIED specifically states that they think mech interp researchers are “heroes” (as part of an example of research they think won’t work in time without political action).