It’s a thoughtful post and all, but for the record, I’m not that interested in what people on Twitter say about me and my allies, and am always a bit sad to see people quoting folks on Twitter and posting it on LessWrong.
I understand not being interested in hearing negative outsider takes, but may I ask why it makes you sad to see negative quotes from Twitter here? For some context as to why I included those tweets, the worldview I’m coming from is one where outside perception can strongly affect our ability to carry out future plans (in governance, getting people to agree to shared safety standards, etc.), and as such it seems worth paying attention to the state of adversarial discourse in influential circles (especially when we can practically affect that discourse). If there’s good reason not to specifically quote from Twitter, however, I’d be happy to remove it, relegate it to footnotes, or use different sources.
Sad and uninteresting seem related to me? It seems solely a distraction, so to read LWers focusing serious attention on a distraction is sad.
See my reply to niplav for my perspective.
It sounds like you’re trying to convince readers that random potshots on Twitter are serious opinions?
If so, this seems a bit absurd, as if readers can’t tell by themselves when someone’s opinion is worth their attention.
Random samples are valuable even when small; the first data point carries the most information. Public opinion matters to some degree (I believe it matters a lot), and Twitter is a widely used platform, so it is decently representative of public opinion on a given topic (at least more representative than LessWrong).
If you want to give a good survey of public opinion on Twitter, you likely should choose tweets that are highly liked. All of the tweets the OP cited have fewer than 1,000 likes. Is that an amount of likes that suggests it’s decently representative of public opinion?
For each tweet the post found arguing its point, I can find two arguing the opposite. Yes, in theory tweets are data points, but in practice the author just uses them to confirm his already-held beliefs.
Random samples from a representative population are valuable. It seems unlikely that Twitter is representative of the general population; more likely, it is representative only of a subset.
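As a rough quantitative gloss on the sampling exchange above, here is a minimal sketch (my own illustration, not anything a commenter wrote, and it assumes the textbook idealization of an opinion poll as independent draws): the standard error of an estimated proportion after n samples is sqrt(p(1-p)/n), so the earliest samples buy the most precision, while extra sampling does nothing about a biased, unrepresentative population.

```python
import math

def se(n: int, p: float = 0.5) -> float:
    """Standard error of an estimated proportion after n independent
    samples, at the worst case p = 0.5. An idealization: real tweets are
    neither independent nor drawn from a representative population."""
    return math.sqrt(p * (1 - p) / n)

for n in [1, 2, 5, 10, 100, 1000]:
    marginal = se(n) - se(n + 1)  # precision bought by one more sample
    print(f"n={n:>4}  se={se(n):.3f}  one more sample shrinks se by {marginal:.5f}")
```

On this toy model the first data point does by far the most work, which supports the small-samples point; the representativeness objection is about bias rather than variance, and a larger sample from the same skewed population does not address it.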
I have weak-downvoted this comment. I don’t know what generated it, but from the outside it looks to me like ignoring a very important aspect of reality (public opinion on the words “AI safety”) in favor of… not exactly sure what? Protecting tribal instincts?
In this case the quoting feels quite adequate to me, since the quotes are not necessarily endorsed, but examined as a phenomenon in the world, along with its implications.
Okay, this was enough meta for me today.
As part of doing anything interesting in the world and saying it out loud on the internet, lots of people on the internet will spout text about you, and I think it’s not interesting or worthwhile to read.
Feynman asks “What do you care what other people think?” which I extend here to “Why do you care to seek out and read what other people think?”
I have a theory that, essentially, all real thinking on the internet gets done in essay form, and anything that is not in the form of an essay does not contain real or original thinking, and should clear a very high bar before it’s worth engaging with (e.g. social media, a lot of scientific papers). For instance, anyone who tweets anything I find genuinely interesting also writes essays (Paul Graham, Eliezer Yudkowsky, Aella, Venkatesh Rao, and so on).
I have difficulty imagining a world where public discourse on the internet matters AND the people engaging with it aren’t having a spout of bad content written about them. The fact that people are spouting negative content about AI safety is not surprising, and in my experience their ideas are of little worth (with the exception of people who write essays).
And of course, many actions that I think might improve the world are outside of the Overton window. Suppose I want to discuss them with other thoughtful LessWrongers. Should I not do so because it will cause people to spout negative text about us, or should I do so and avoid caring about the negativity? I deem it to be the latter.
Thanks for the detailed response, I really appreciate it! For the future I’ll see if I can link to more essays (over social media posts) when giving evidence about potentially important outside opinions. I’m going offline in a few minutes, but will try to add some links here as well when I get back on Sunday.
As for the importance of outside opinions that aren’t in essay form, I fully agree with you that some amount of critique is inevitable if you are doing good, impactful work. I also agree we should not alter our semi-private conversations on LessWrong and elsewhere to accommodate (bad-faith) critics. Things are different, however, when you are releasing a public-facing product, and talking about questionably defined “AI ethics” in a literal press release. There, everything is about perception, and you should expect people to be influenced heavily by your wording (if your PR folks are doing their jobs right 🙃).
Why should we care about the non-essay-writing public? Well, one good reason is politics. I don’t know what your take is on AI governance, but a significant (essay-writing) portion of this community believes it to be important. In order to do effective work there, we will need to be in a position where politicians and business leaders in tech can work with us with minimal friction. If there is one thing politicians (and, to a lesser degree, some corporations) care about, it is general public perception. They are generally fine with pushback from a very small minority, but if the general vibe in Silicon Valley becomes “AI ethicists are mainly partisan, paternalistic censors,” there is a very strong incentive not to work with us.
Unfortunately, I believe that the above vibe has been growing both on and offline as a result of actions which members of this community have had some amount of control over. We shouldn’t bend over backwards to accommodate critics, but if we can make our own jobs easier by, say, better communicating our goals in our public-facing work, why not do that?
Things are different, however, when you are releasing a public-facing product, and talking about questionably defined “AI ethics” in a literal press release.
I didn’t do this, and LessWrong didn’t do this.
For the future I’ll see if I can link to more essays (over social media posts) when giving evidence about potentially important outside opinions.
To be clear, as a rule I’m just not reading it if it’s got social media screenshots about LW discussion, unless the social media author is someone who also writes good and original essays online.
I don’t want LessWrong to be a cudgel in a popularity contest, and your responding to my comment by saying you’ll aim to give higher-quality PR advice in the future misses my point.
I don’t know what your take is on AI governance, but a significant (essay-writing) portion of this community believes it to be important.
Citation needed? Anyway, my take is that using LW’s reputation in a popularity tug-of-war is a waste of our reputation. Plus you’ll lose.
In order to do effective work there, we will need to be in a position where politicians and business leaders in tech can work with us with minimal friction.
Just give up on that. You will not get far with that.
We shouldn’t bend over backwards to accommodate critics, but if we can make our own jobs easier by, say, better communicating our goals in our public-facing work, why not do that?
I don’t know why you are identifying “ML developers” with “LessWrong users”; the two groups do not overlap much.
This mistake is perhaps what leads you, in the OP, to not only give PR advice, but to give tactical advice on how to get censorship past people without them noticing, which seems unethical to me. In contrast, I would encourage making your censorship blatant, so that people know they can trust you not to be getting one over on them when you speak.
I’m not trying to be wholly critical; I do have admiration for many things in your artistic and written works. But reading this post, I suggest doing a halt, melt, and catch fire, and finding a new way to try to help out with the civilizational ruin coming our way from AI. I want LessWrong to be a place of truth and wisdom; I never want LessWrong to be a place where you can go to get tactical advice on how to get censorship past people to comply with broad political pressures in the populace.
I mostly agree with what you wrote (the claim that “all real thinking on the internet gets done in essay form” is especially interesting, though I might push back against it a bit and point to really good comments, podcasts, and forecasting platforms). I do endorse the negative sentiment around using privately owned social media companies (as in, wishing they would burn in hell) for any purpose other than the most inane shitposting, and would prefer everyone interested in making intellectual progress to abandon them (yes, that also includes substacks).
Ahem.
I guess you approach the tweets by judging whether their content is useful to engage with qua content (“is it true or interesting, what those people are saying?”, which, I agree with you, is not the case), as opposed to approaching them sociologically (“what do the things those people are saying predict about how they will vote, and especially act, in the future?”). Similarly, I might not care about how a barometer works, but I’d still want to use it to predict storms (I do, in fact, care about knowing how barometers work, and just spent 15 minutes reading the Wikipedia article). The latter approach still strikes me as important, though I get the “ick” and “ugh” reaction against engaging in public relations, and I’m happy I’m obscure enough not to have to bother about it. But in the unlikely case that a big newspaper ran a huge smear campaign against me, I’d want to know!
And then think hard about next steps: maybe hiring public relations people to deal with it? Or gracefully responding with a public clarification?
I think you might be underestimating Twitter’s role in civilizational thinking.
Like, if I decide I’m going to train for a marathon, in some sense I don’t care what a random thought like “I’m tired and don’t wanna keep running” has to say. The answer is “Nah.”
But it’s also pretty damn important that I notice and account for the thought if I want to keep training.
I actually just got back from exercising. While I was there, I noticed I’d built up an anticipation of pain from keeping going. Now, I do want to keep going longer than I did today. But I also want that part of my mind to feel it can have control/agency over my choices, so I happily wrapped up after ~30 mins, and walked home. Next time I’ll probably feel more comfortable going longer.
But anyway, I’m not seeing the analogy. (Also, it’s hard to argue with analogies; I find myself getting lost in hypotheticals all day.)
I don’t respect Twitter anywhere near as much as I respect the part of me that is resistant to physical pain. The relevant part of me that fears physical pain feels like a much more respectable negotiation partner; it cares about something I roughly see as valuable, and I expect I can get it what it wants whilst also getting what I care about (as much physical ease and movement as I desire).
I have a great disrespect for Twitter; it wants to eat all of my thoughts and ideas for its content-creation machine and transform them into their most misinterpreted form, and in return will give me a lot of attention. I care little about attention on the current margin and care a lot about not having to optimize against the forces of misinterpretation.
I’d be interested in reading an argument about how Twitter plays a useful role in civilizational cognition, with the hypothesis to beat being “it’s a mess of symbols and simulacra that is primarily (and almost solely) a force for distraction and destruction”.
I’m not suggesting you remove it from your map of the world; it’s a key part of understanding various bits of degeneration and adversarial forces. I’m suggesting that giving the arguments and positions that arise there much object-level consideration is a grave distraction, and caring about what people say about you there is kind of gross.
The difference is that I can’t shut down my own internal monologue or suppress my own internal subagents, but I can just choose to Not Read Twitter and, further, Not Post What People On Twitter Say. Which is what I generally choose to do.
That seems like a fine choice, though the analog here would be whether civilization can Not Read Twitter and Not Post What People On Twitter Say. I think civilization has about as much difficulty with that as you or I do with shutting down our respective internal monologues.
I also agree that I am less able to get out of a negotiation with the part of me that is resistant to physical pain, whereas it seems way more doable to me to have massive positive influence on the world without having to care very much about the details of what people write about you on Twitter.