You can imagine an argument that goes “Violence against AI labs is justified in spite of the direct harm it does, because it would prevent progress towards AGI.” I have only ever heard people say that someone else’s views imply this argument, and never heard anyone actually advance it sincerely; nevertheless the hypothetical argument is at least coherent.
Yudkowsky’s position is that the argument above is incorrect because he denies the premise that using violence in this way would actually prevent progress towards AGI. See e.g. here and the dialogue that follows. (I assume he also believes the usual reasons why clever one-time exceptions to the taboo against violence are unpersuasive.)
I’m strongly against letting anyone insert anything into the middle of someone else’s post/comment. Nothing should grab the microphone away from the author until they’ve finished speaking.
When Medium added the feature that lets readers highlight an author’s text, I found it so disruptive that I never read anything on Medium again. If LW implemented inline reader commentary in a similarly distracting way, that alone would probably be enough to drive me away from here, too.