Things are different, however, when you are releasing a public-facing product, and talking about questionably defined “AI ethics” in a literal press release.
I didn’t do this, and LessWrong didn’t do this.
For the future I’ll see if I can link to more essays (rather than social media posts) when giving evidence about potentially important outside opinions.
To be clear, as a rule I’m just not reading it if it’s got social media screenshots about LW discussion, unless the social media author is someone who also writes good and original essays online.
I don’t want LessWrong to be a cudgel in a popularity contest, and your responding to my comment by saying you’ll aim to give higher-quality PR advice in the future misses my point.
I don’t know what your take is on AI governance, but a significant (essay-writing) portion of this community believes it to be important.
Citation needed? Anyway, my take is that using LW’s reputation in a popularity tug-of-war is a waste of our reputation. Plus you’ll lose.
In order to do effective work there, we will need to be in a position where politicians and business leaders in tech can work with us with minimal friction.
Just give up on that. You will not get far with that.
We shouldn’t bend over backwards to accommodate critics, but if we can make our own jobs easier by, say, better communicating our goals in our public-facing work, why not do that?
I don’t know why you are identifying “ML developers” with “LessWrong users”; the two groups do not overlap much.
This mistake is perhaps what leads you, in the OP, to not only give PR advice, but to give tactical advice on how to get censorship past people without them noticing, which seems unethical to me. In contrast I would encourage making your censorship blatant so that people know that they can trust you to not be getting one over on them when you speak.
I’m not trying to be wholly critical; I do have admiration for many things in your artistic and written works. But reading this post, I suggest doing a halt, melt, and catch fire, and finding a new way to try to help out with the civilizational ruin coming our way from AI. I want LessWrong to be a place of truth and wisdom; I never want LessWrong to be a place where you can go to get tactical advice on how to get censorship past people in order to comply with broad political pressures in the populace.