“if you write something that will predictably make people feel worse about [real person or org], you should stick to journalistic standards of citing sources and such”
This is a selective demand for rigour, which induces an extremely strong positivity bias when discussing other people. I would not willingly introduce such a strong bias.
I think other norms make sense and do not lead entire communities to distort their vision of the social world: cordiality, politeness, courtesy, and the like.
I think it’s very unlikely that having laxer standards for accusing others is a good thing.
I know you think so. And I disagree, especially on “~0% suffer from having too high standards” (my immediate reaction is that you are obviously rejecting the relevant evidence when you say this).
This is why I am thinking of writing an article specifically about this, tailored to LessWrong.
To varying degrees. People are probably less negative on Anthropic than OpenAI. We’re certainly not enthusiastic about OpenAI. In any case, I don’t think it summarizes to “the LessWrong community has supported” these orgs.
Have you read the most upvoted responses to your link?
The conclusion of the most upvoted one is “I think people who take safety seriously should consider working at OpenAI” (with the link to its job page!).
The conclusion of the second most upvoted one, from Ben Pace, is “Overall I don’t feel my opinion is very robust, and could easily change.” and “And of course I’m very happy indeed about a bunch of the safety work they do and support. The org give lots of support and engineers to people like Paul Christiano, Chris Olah, etc”. For reference, Paul Christiano’s “safety work” included RLHF, which was instrumental to ChatGPT.
From my point of view, you are painfully wrong about this, and indeed, LessWrong should have had much more enmity toward OpenAI, instead of recommending that people work there for safety reasons.
Have you read the most upvoted responses to your link?
Yes. I don’t think any of them suggest that LessWrong is supporting or enthusiastic about OpenAI. (In particular, whether you should work there doesn’t have much relation to whether the company as a whole is a net negative.) I would describe the stance of the top two comments on that post as mixed[1], and LW’s stance in general as mixed-to-negative.
LessWrong should have had much more enmity toward OpenAI,
Fwiw this is not a crux; I might agree that we should be more negative toward OpenAI than we are. I don’t think that’s an argument for laxer standards of criticism. Standards for rigor should lead to higher-quality criticism, not less harsh criticism. If you had attacked Greenpeace twice as much but had substantiated all your claims, I wouldn’t have downvoted the post. I’d guess that the net effectiveness of a community’s criticism of a person or org goes up with stricter norms.
E.g., Ben Pace also says, “An obvious reason to think OpenAI’s impact will be net negative is that they seem to be trying to reach AGI as fast as possible, and trying a route different from DeepMind and other competitors, so are in some world shortening the timeline until AI. (I’m aware that there are arguments about why a shorter timeline is better, but I’m not sold on them right now.)”
By the way, tone doesn’t come across well in writing. To be fair, even orally, I am often a bit abrasive.
So just to be clear: I’m thankful that you’re engaging with the conversation. Furthermore, I am assuming that you are doing so genuinely, so thanks for that too.
yes. I don’t think any of them suggest that LessWrong is supporting or enthusiastic about OpenAI
I think you may have misread what I wrote.
My statements were that the LessWrong community has supported DeepMind, OpenAI and Anthropic, and that it had friends in all three companies.
I did not state that it was enthusiastic about them, and much less that it currently is. When I say “has supported”, I literally mean that it has supported them: Eliezer introducing Demis and Thiel, Paul Christiano doing RLHF at OpenAI and helping with ChatGPT, the whole cluster founding Anthropic, all the people safety-washing the companies, etc. I didn’t make a grand statement about its feelings, just a pragmatic one about some of its actions.
Nevertheless, as a reaction to my statements, you picked a thread where the top answer recommends people work at OpenAI, and where the second topmost answer expresses happiness about capabilities work (Paul’s RLHF).
How could he have known, two years before ChatGPT, that Paul’s work would lead to capabilities? By using enmity and keeping in mind that an organisation racing to AGI will leverage all of its internal research (including the research labelled “safety”) for capabilities.
I don’t know how you did footnotes in comments, but...
For instance, the context of Ben Pace’s response was one in which many people in the community at the time (plausibly himself too!) recommended people work on OpenAI’s safety teams.
He mentions in his comment that he is happy that Paul and Chris get more money at OpenAI than they would have had otherwise; the same reasoning would have applied to other researchers working with them.
From my point of view, this is pretty damning. You picked one post, and the topmost answers featured two examples of support: exactly the type you would naturally, and clearly should, avoid giving to enemies.
To be clear, the LessWrong community has supported DeepMind, OpenAI and Anthropic many times, while also harbouring bad feelings about them. This is quite a normal, awkward situation in the absence of clear enmity.
This is not surprising. Enmity would have helped clarify this relationship and avoid this mistake.
Also, remember that I do not view enmity as a single-dimensional axis, and this is a major point of my thesis! My recommendation boils down to: be more proactive in deeming others enemies, and at the same time remain cordial, polite and professional with them.