Yea, having similar feelings about this post. The conclusion is probably still correct, but not sufficiently established. And I think there should be, idk, a norm about being more thorough when talking badly about an org, and violating that doesn’t seem worth the point made here.
I am genuinely interested in your point of view.
I see it as causally connected to why the LessWrong community has supported three orgs racing to AGI.
Out of the following, which would count as “talking badly about an org”, and which would require a norm of being more thorough before saying them?
“Greenpeace has tied its identity to anti-nuke, and if you’re pro-nuke you’ll be fighting them for as long as they exist”
“If you are for nuke and market solutions, you’ll find Greenpeace has taken consistently terrible stances”
“If you are for nuke and market solutions, every dollar Greenpeace gets is a loss for you”
“Greenpeace is an enemy, but specifically not stupid or evil”
“Strong supporters of Greenpeace will purposefully slow down nuclear energy, technological solutions and market mechanisms”
If the above passes your threshold for “need to be more thorough before saying it”, then that informs what a potential follow-up to my article, geared toward LessWrong, would have to be about.
Specifically, it should be about LessWrong having a bad culture. One that favours norms that make punishing enemies harder, up to the point of not being able to straightforwardly say “if you are pro-nuke, an org that has been anti-nuke for decades is your enemy”. Let alone dealing with AI corporations racing to AGI that have friends in the community.
If the above doesn’t pass your threshold and you think it’s fine, then I don’t think it makes sense for me to write a follow-up article for LessWrong. That was basically as far as my article went, IIRC, so the problem lies deeper.
So I think the norm is something like “if you write something that will predictably make people feel worse about [real person or org], you should stick to journalistic standards of citing sources and such”. That means whether each of your quotes is acceptable depends on whether you’ve sufficiently established its substance.
If we take your post as it is now, well, you only have one source, which is the group letter to Congress. IMO, as you used it, this actually does not even establish that they’re anti-nuclear-power, because the letter is primarily about fossil fuels, and the quote about nuclear power is in the context of protecting indigenous rights. Also, you said it was signed with 600 other companies, so it might have been a compromise (maybe they oppose some parts of the content but thought the entire thing was still worth signing). An endorsement of a compromise/package is just not a good way to establish their position. It would be much better to just look at the Wikipedia page and see whether that says they’re anti-nuclear. Which in fact it does, in the introduction. Some would probably quibble with that, but for me that would actually be enough. So if you just did that, then I’d excuse all quotes that only reference them being anti-nuclear-power (which I guess is just the first in your list).
Saying that they’re my enemy is a little harder, because it would require establishing that they’re a net negative for climate protection. This is not obvious; you could have an org that’s anti-nuclear-power and still does more good than harm overall. It probably still wouldn’t be that difficult, but your post as it is certainly falls short. (And BTW, it’s also not obvious that being anti-nuclear-power now is as bad as having been anti-nuclear-power historically. It could be that having been anti-nuclear-power historically was a huge mistake and we should have invested in the technology all this time, but that, since we didn’t, at this point it actually no longer makes sense and we should only invest in renewables. I don’t think that’s the case, I think we should probably still build nuclear reactors now, but I’m genuinely not sure. This kind of thing very much matters for the ‘net negative impact’ question.)
Specifically, it should be about LessWrong having a bad culture. One that favours norms that make punishing enemies harder, up to the point of not being able to straightforwardly say “if you are pro-nuke, an org that has been anti-nuke for decades is your enemy”.
I think it’s very unlikely that having laxer standards for accusing others is a good thing. Broadly speaking, it seems to me that ~100% of groups-that-argue-about-political-or-culture-war-topics suffer from having too low standards for criticizing the outgroup, and ~0% suffer from having too high standards. And I don’t think these standards are even that high: you could write a post that says Greenpeace is my enemy, you’d just have to put in the effort to source your claims a little. Or, more practically, you could have just written the post about a fictional org; then you can make your point about enemies without having to deal with the practical side of attacking a real org.
Not related, but
why the LessWrong community has supported three orgs racing to AGI.
This was not my impression. My impression was that people associated with the community have founded orgs that then did capability research, but that many, probably most, people on LW think that’s a disaster. To varying degrees. People are probably less negative on Anthropic than OpenAI. We’re certainly not enthusiastic about OpenAI. In any case I don’t think it summarizes to “the LessWrong community has supported” these orgs.
“if you write something that will predictably make people feel worse about [real person or org], you should stick to journalistic standards of citing sources and such”
This is a selective demand for rigour, which induces an extremely strong positivity bias when discussing other people. I would not willingly introduce such a strong bias.
I think other norms make sense, and do not lead to entire communities distorting their vision of the social world. Cordiality, politeness, courtesy and the like.
I think it’s very unlikely that having laxer standards for accusing others is a good thing.
I know you think so. And I disagree, especially on “~0% suffer from having too high standards” (my immediate reaction is that you are obviously rejecting the relevant evidence when you say this).
This is why I am thinking of writing an article specifically about this, tailored to LessWrong.
To varying degrees. People are probably less negative on Anthropic than OpenAI. We’re certainly not enthusiastic about OpenAI. In any case I don’t think it summarizes to “the LessWrong community has supported” these orgs.
Have you read the most upvoted responses to your link?
The conclusion of the top one is “I think people who take safety seriously should consider working at OpenAI” (with the link to its job page!).
The conclusion of the second most upvoted one, from Ben Pace, is “Overall I don’t feel my opinion is very robust, and could easily change.”, and “And of course I’m very happy indeed about a bunch of the safety work they do and support. The org give lots of support and engineers to people like Paul Christiano, Chris Olah, etc”. For reference, Paul Christiano’s “safety work” included RLHF, which was instrumental to ChatGPT.
From my point of view, you are painfully wrong about this, and indeed, LessWrong should have had much more enmity toward OpenAI, instead of recommending people work there because of safety.
Have you read the most upvoted responses to your link?
Yes. I don’t think any of them suggest that LessWrong is supporting or enthusiastic about OpenAI. (In particular, whether you should work there doesn’t have much relation to whether the company as a whole is a net negative.) I would describe the stance of the top 2 comments on that post as mixed[1], and LW’s stance in general as mixed-to-negative.
LessWrong should have had much more enmity toward OpenAI,
FWIW this is not a crux; I might agree that we should be more negative toward OpenAI than we are. I don’t think that’s an argument for laxer standards of criticism. Standards for rigor should lead toward higher quality criticism, not less harsh criticism. If you had attacked Greenpeace twice as much but had substantiated all your claims, I wouldn’t have downvoted the post. I’d guess that the net effectiveness of a community’s criticism of a person or org goes up with stricter norms.
e.g., Ben Pace also says, “An obvious reason to think OpenAI’s impact will be net negative is that they seem to be trying to reach AGI as fast as possible, and trying a route different from DeepMind and other competitors, so are in some world shortening the timeline until AI. (I’m aware that there are arguments about why a shorter timeline is better, but I’m not sold on them right now.)”
By the way, tone doesn’t come across well in writing. To be fair, even orally, I am often a bit abrasive.
So just to be clear: I’m thankful that you’re engaging with the conversation. Furthermore, I am assuming that you are doing so genuinely, so thanks for that too.
Yes. I don’t think any of them suggest that LessWrong is supporting or enthusiastic about OpenAI
I think you may have misread what I wrote.
My statements were that the LessWrong community has supported DeepMind, OpenAI and Anthropic, and that it had friends in all three companies.
I did not state that it was enthusiastic about them, and much less that it currently is. When I say “has supported”, I literally mean that it has supported them: Eliezer introducing Demis and Thiel, Paul Christiano doing RLHF at OpenAI and helping with ChatGPT, the whole cluster founding Anthropic, all the people safety-washing the companies, etc. I didn’t make a grand statement about its feelings, just a pragmatic one about some of its actions.
Nevertheless, as a reaction to my statements, you picked a thread where the top answer recommends people work at OpenAI, and where the second topmost answer expresses happiness about capabilities work (Paul’s RLHF).
How could he have known, two years before ChatGPT, that Paul’s work would lead to capabilities? By using enmity, and by keeping in mind that an organisation that races to AGI will leverage all of its internal research (including the research labelled “safety”) for capabilities.
I don’t know how you did footnotes in comments, but...
For instance, the context of Ben Pace’s response was one in which many people in the community at the time (plausibly himself included!) recommended working on OpenAI’s safety teams.
He also mentions in his comment that he is happy that Paul and Chris get more money at OpenAI than they would have had otherwise; the same reasoning would have applied to other researchers working with them.
From my point of view, this is pretty damning. You picked one post, and the topmost answers featured two examples of support, of the type that you would naturally, and should clearly, avoid with enemies.
To be clear, the LessWrong community has supported DeepMind, OpenAI and Anthropic many times, and at the same time felt bad feelings about them too. This is quite a normal, awkward situation in the absence of clear enmity.
This is not surprising. Enmity would have helped with clarifying this relationship and with not committing this mistake.
Also, remember that I do not view enmity as a single-dimensional axis, and this is a major point of my thesis! My recommendation comes down to: be more proactive in deeming others enemies, and at the same time, remain cordial, polite and professional with them.