I think that the people in the ecosystem you’re criticizing would not approve of Scott’s post.
I think this is false. I expect Scott’s post to be heavily upvoted, to get an enormously positive agree/disagree ratio if it were posted to the EA Forum, and in general for people to believe something pretty close to it.
There are a few exceptions (somewhat ironically, a good chunk of the cG AI-risk people), but they would be relatively sparse. I think this is roughly how someone who is smart, but doesn’t have a strong inside-view take on what they should do about AI risk, believes they should act if they want to be a good member of the EA community. My guess is it’s also pretty close to what leadership at cG, CEA and Anthropic believe, plus it would poll pretty well at a thing like SES.
He’s just a guy in it, and one who isn’t even that closely connected to Anthropic or Coefficient Giving people.
The issue is of course not that Scott is right or wrong about what Anthropic or cG people believe. The issue is that he seems to be taking a view where you should be super strategic in your communications, sneer at anyone who is open about things, and measure your success by how many of your friends are now at the levers of power.
I think this is not a good summary of what Coefficient Giving has done.
I think cG’s funding decisions were really very centrally about trying to punish people who weren’t being strategic in their communications in the way that Dustin wanted them to be.
I think other “all kinds of complicated adversarial shit” has also happened, though it’s harder to point to. At a minimum I will point to the fact that invitation decisions to things like SES have followed similar adversarial “you aren’t cooperating with our strategic communications” principles.
I think this is false. I expect Scott’s post to be heavily upvoted, to get an enormously positive agree/disagree ratio if it were posted to the EA Forum, and in general for people to believe something pretty close to it.
The EA Forum is a trash fire, so who knows what would happen if this was published there.
My read of the social dynamics is that in places where people are inclined to defer to me or people like me, they might initially approve of the Scott thing for bad tribal reasons, but change their mind when they read criticism of it from me or someone like me (which is ofc part of why I sometimes bother commenting on things like this).
My guess is it’s also pretty close to what leadership at cG, CEA and Anthropic believe, plus it would poll pretty well at a thing like SES.
I think that Scott’s post would not overall be received positively by those people. Maybe you’re saying that one of the directions argued for by Scott’s post is approved of by those people? I agree with that more.
My read of the social dynamics is that in places where people are inclined to defer to me or people like me, they might initially approve of the Scott thing for bad tribal reasons, but change their mind when they read criticism of it from me or someone like me
Well, I mean, that is a conditional that’s hard to falsify, since if people didn’t change their minds, that would largely invalidate the premise that they are inclined to defer to you. Unfortunately, I think the vast majority of places in EA do not defer to you or people like you, and furthermore I think you are pretty importantly wrong about your criticisms, so I don’t quite know how to feel about this.
I do think it helps, and I am marginally happy about your cultural influence here (though it’s tricky; I also think a bunch of your takes here are quite dumb). I think the vast majority of the cultural influence here is downstream of not quite anyone in particular, but more Anthropic than anywhere else, and neither you nor I can change that very much.
I think that Scott’s post would not overall be received positively by those people.
Yeah, I expect it to be straightforwardly positively received. I think people will be like “some parts of this seem dumb, the Ilya thing in particular, but yeah, fuck those rationalists and MIRI people, I am with Scott on that”.
To be clear, I am not expecting consensus here; I think this is what 75% of people who have any opinion at all on anything adjacent to this believe, but I expect people would broadly think it’s a good contribution that properly establishes norms and reflects how they think about things.
I also think it’s plausible people would be like “wow, what an uncouth way for both of these people to interface with each other, please get away from each other, children”, but then actually if you talked to them afterwards, they would be like “yeah, I mean, that was a bit of a shitshow, but I do think Scott was basically right here (minus 1-2 minor things)”.
I am not enormously confident on this, but it matches my experiences of the space.
In case it matters to either of you, my guesses:
I agree with Habryka that absent criticism Scott’s post would be well received by an important group of people reasonably characterized as EA-ish AI safety people.
Imo absent criticism Rob’s post would be well received by a different group of people reasonably characterized as doomers. (Literally right before seeing this thread I saw another post on LW that is directionally correct but is mostly wrong or exaggerated in its details, and that was very well received.)
Both posts are broadly wrong about lots of things, about equally so, such that most people would be better off having never encountered either of them.
Tbc, my first-order intuitive impression is that Scott’s post is much more directionally accurate. But I expect that is because I constantly experience people knifing me, pushing me to take strategies that systematically destroy my ability to do anything while gaining approximately no safety benefit, or making claims about members of groups that include me that are false of me, whereas I don’t really experience any of the stuff that Rob gestures at, even though I expect it exists. Though Rob’s post doesn’t actually inform me of it, because his actual claims are false, and I cannot infer the underlying experiences that led him to make them. Another example of trapped priors if you don’t have second order corrections. (Tbc his follow-up post makes this substantially clearer.)
You probably already know I think this, but imo you should both give up on trying to make public discourse in the AI safety community non-insane, and do other things that have a shot at working. (Since I know this will be misinterpreted by other readers, let me be clear that there are plenty of other kinds of public writing that do not fall in that bucket which I do think are worth doing.)