I have weak-downvoted this comment. I don’t know what generated it, but from the outside it looks to me like ignoring a very important aspect of reality (public opinion on the words “AI safety”) in favor of… not exactly sure what? Protecting tribal instincts?
In this case the quoting feels quite adequate to me, since the quotes are not necessarily endorsed, but examined as a phenomenon in the world, along with its implications.

Okay, this was enough meta for me today.
If you do anything interesting in the world and say it out loud on the internet, lots of people on the internet will spout text about you, and I think that text is not interesting or worthwhile to read.
Feynman asks “What do you care what other people think?” which I extend here to “Why do you care to seek out and read what other people think?”
I have a theory that, essentially, all real thinking on the internet gets done in essay form, and that anything not in the form of an essay does not contain real or original thinking and should clear a very high bar before it's worth engaging with (e.g. social media, a lot of scientific papers). For instance, anyone who tweets anything I find genuinely interesting also writes essays (Paul Graham, Eliezer Yudkowsky, Aella, Venkatesh Rao, and so on).
I have difficulty imagining a world where public discourse on the internet matters AND the people engaging with it aren’t having a spout of bad content written about them. The fact that people are spouting negative content about AI safety is not surprising, and in my experience their ideas are of little worth (with the exception of people who write essays).
And of course, many actions that I think might improve the world are outside the Overton window. Suppose I want to discuss them with other thoughtful LessWrongers. Should I not do so because it will cause people to spout negative text about us, or should I do so and avoid caring about the negativity? I deem it to be the latter.
Thanks for the detailed response, I really appreciate it! In the future I'll see if I can link to essays (rather than social media posts) when giving evidence about potentially important outside opinions. I'm going offline in a few minutes, but will try to add some links here as well when I get back on Sunday.
As for the importance of outside opinions that aren’t in essay form, I fully agree with you that some amount of critique is inevitable if you are doing good, impactful work. I also agree we should not alter our semi-private conversations on LessWrong and elsewhere to accommodate (bad-faith) critics. Things are different, however, when you are releasing a public-facing product, and talking about questionably defined “AI ethics” in a literal press release. There, everything is about perception, and you should expect people to be influenced heavily by your wording (if your PR folks are doing their jobs right 🙃).
Why should we care about the non-essay-writing public? Well, one good reason is politics. I don't know what your take is on AI governance, but a significant (essay-writing) portion of this community believes it to be important. In order to do effective work there, we will need to be in a position where politicians and business leaders in tech can work with us with minimal friction. If there is one thing politicians (and to a lesser degree some corporations) care about, it is general public perception, and while they are generally fine with pushback from a very small minority, if the general vibe in Silicon Valley becomes "AI ethicists are mainly partisan, paternalistic censors," there is a very strong incentive not to work with us.
Unfortunately, I believe that the above vibe has been growing both online and offline as a result of actions that members of this community have had some control over. We shouldn't bend over backwards to accommodate critics, but if we can make our own jobs easier by, say, better communicating our goals in our public-facing work, why not do that?
Things are different, however, when you are releasing a public-facing product, and talking about questionably defined “AI ethics” in a literal press release.
I didn’t do this, and LessWrong didn’t do this.
In the future I'll see if I can link to essays (rather than social media posts) when giving evidence about potentially important outside opinions.
To be clear, as a rule I’m just not reading it if it’s got social media screenshots about LW discussion, unless the social media author is someone who also writes good and original essays online.
I don't want LessWrong to be a cudgel in a popularity contest, and your responding to my comment by saying you'll aim to give higher-quality PR advice in the future misses my point.
I don’t know what your take is on AI governance, but a significant (essay-writing) portion of this community believes it to be important.
Citation needed? Anyway, my take is that using LW’s reputation in a popularity tug-of-war is a waste of our reputation. Plus you’ll lose.
In order to do effective work there, we will need to be in a position where politicians and business leaders in tech can work with us with minimal friction.
Just give up on that. You will not get far with that.
We shouldn’t bend over backwards to accommodate critics, but if we can make our own jobs easier by, say, better communicating our goals in our public-facing work, why not do that?
I don't know why you are identifying "ML developers" with "LessWrong users"; the two groups do not overlap much.
This mistake is perhaps what leads you, in the OP, to not only give PR advice, but to give tactical advice on how to get censorship past people without them noticing, which seems unethical to me. In contrast I would encourage making your censorship blatant so that people know that they can trust you to not be getting one over on them when you speak.
I'm not trying to be wholly critical; I do have admiration for many things in your artistic and written works. But reading this post, I suggest halting, melting, catching fire, and finding a new way to try to help out with the civilizational ruin coming our way from AI. I want LessWrong to be a place of truth and wisdom; I never want LessWrong to be a place where you can go to get tactical advice on how to get censorship past people in order to comply with broad political pressures in the populace.
I mostly agree with what you wrote (the theory that "all real thinking on the internet gets done in essay form" is especially interesting, though I might push back against it a bit and point to really good comments, podcasts & forecasting platforms). I do endorse the negative sentiment toward privately owned social media companies (as in, wishing them to burn in hell), and would prefer everyone interested in making intellectual progress to abandon them for any purpose other than the most inane shitposting (yes, that also includes Substacks).
Ahem.
I guess you approach the tweets by judging whether their content is useful to engage with qua content ("is what those people are saying true or interesting?", which, I agree with you, is not the case), as opposed to approaching them sociologically ("what do the things those people are saying predict about how they will vote, and especially act, in the future?"). Similarly, I might not care about how a barometer works, but I'd still want to use it to predict storms (I do, in fact, care about knowing how barometers work, and just spent 15 minutes reading the Wikipedia article). The latter approach still strikes me as important, though I get the "ick" and "ugh" reaction against engaging in public relations, and I'm happy I'm obscure enough not to have to bother with it. But in the unlikely case that a big newspaper ran a huge smear campaign against me, I'd want to know!
And then think hard about next steps: maybe hiring public relations people to deal with it? Or gracefully responding with a public clarification?