FWIW, I think it is correct for Eliezer to be derisive about these works, instead of just politely disagreeing.
Long story short, derision is an important negative signal that something should not be cooperated with. Couching words politely is inherently a weakening of that signal. See here for more details of my model.
I do know that this is beside the point you’re making, but it feels to me like there is some resentment about that derision here.
If that’s a claim that Eliezer wants to make (I’m not sure if it is!), I think he should make it explicitly and ideally argue for it. Even just stating the claim more explicitly would allow others to counter-argue it, rather than leaving it implicit and unargued.[1] I think it’s dangerous for people to defer to Eliezer about whether or not it’s worth engaging with people who disagree with him, which limits the usefulness of claims without arguments.
Also, an aside on the general dynamics here. (Not commenting on Eliezer in particular.) You say “derision is an important negative signal that something should not be cooperated with”. That’s in the passive voice; more accurate would be “derision is an important negative signal by which the speaker warns the listener not to cooperate with the target of derision”. That’s consistent with “the speaker cares about the listener and warns the listener that the target isn’t useful for the listener to cooperate with”. But it’s also consistent with e.g. “it would be in the speaker’s interest for the listener not to cooperate with the target, and the speaker is warning the listener that the speaker might deride/punish/exclude the listener if they cooperate with the target”. General derision mixes together all these signals, and some of them are decidedly anti-epistemic.
For example, if the claim is “these people aren’t worth engaging with”, I think there are pretty good counter-arguments even before you start digging into the object level: the people in question have a track record of being willing to publicly engage on the topics of debate, of being willing to publicly change their mind, and of being open enough to differing views to give MIRI millions of dollars back when MIRI was more cash-constrained than it is now. They also understand points that Eliezer thinks are important better than most people Eliezer actually spends time arguing with.
To be clear, I don’t particularly think that Eliezer does want to make this claim. It’s just one possible way that “don’t cooperate with” could cash out here, if your hypothesis is correct.
He has explicitly argued for it! He has written something like a 10,000-word essay with lots of detailed critique:
https://www.lesswrong.com/posts/ax695frGJEzGxFBK4/biology-inspired-agi-timelines-the-trick-that-never-works
Adele: “Long story short, derision is an important negative signal that something should not be cooperated with”
Lukas: “If that’s a claim that Eliezer wants to make (I’m not sure if it is!) I think he should make it explicitly and ideally argue for it.”
Habryka: “He has explicitly argued for it”
What version of the claim “something should not be cooperated with” is present + argued-for in that post? I thought that post was about the object level. (Which IMO seems like a better thing to argue about. I was just responding to Adele’s comment.)
I don’t think he is (nor should be) signaling that engaging with people who disagree is not worth it!
Acknowledged that that is more accurate. I do not dispute that people misuse derision and other status signals in lots of ways, but I think that this is more-or-less just a subtler form of lying/deception or coercion, and not something inherently wrong with status. That is, I do not think you can achieve the same epistemic effect without being derisive in certain cases. Not that all derision is a good signal.
Ok. If you think it’s correct for Eliezer to be derisive, because he’s communicating the valuable information that something shouldn’t be “cooperated with”, can you say more specifically what that means? “Not engage” was speculation on my part, because that seemed like a salient way to not be cooperative in an epistemic conflict.
My read is that the cooperation he is against is with the narrative that AI risk is not that important (because it’s too far away or weird or whatever). This indeed influences which sorts of agencies get funded, which is a key thing he is upset about here.
On the other hand, engaging with the arguments is cooperation toward shared epistemics, which I’m sure he’s happy to do. Also, I think that if he thought the arguments in question were coming from a genuine epistemic disagreement (and not motivated cognition of some form), he would (correctly) be less derisive. There is much more to be gained (in expectation) from engaging with an intellectually honest opponent than from one with a bottom line.
Hm, I still don’t really understand what it means to be [against cooperation with the narrative that AI risk is not that important]. Beyond just believing that AI risk is important and acting accordingly. (A position that seems easy to state explicitly.)
Also: The people whose work is being derided definitely don’t agree with the narrative that “AI risk is not that important”. (They are and were working full-time to reduce AI risk because they think it’s extremely important.) If the derisiveness is being read as a signal that “AI risk is important” is a point of contention, then the derisiveness is misinforming people. Or if the derisiveness was supposed to communicate especially strong disapproval of any (mistaken) views that would directionally suggest that AI risk is less important than the author thinks: then that would just seem like soldier mindset (more harshly criticizing views that push in directions you don’t like, holding goodness-of-the-argument constant), which seems much more likely to muddy the epistemic waters than to send important signals.
Yeah, those are good points… I think there is a conflict with the overall structure I’m describing, but I’m not modeling the details well apparently.
Thank you!