Summary: I’m aware of many of the real debates that inspired this dialogue. In those real cases, disagreement with or criticism of public accusations of lying made against various professional organizations in effective altruism, or in AI risk, has repeatedly been interpreted as a blanket refusal to honestly engage with the claims being made. Instead of a good-faith effort to resolve the various disputes those public accusations of lying provoke, the accusations are repeated, and the justifications for them are spun into long, complicated theories. These theories don’t appear to respond at all to the content of the disagreements with the accusations of lying and dishonesty, and that’s why the repeated accusations and their justifications are poorly received.
These complicated theories also aren’t what people actually want when public accusations of dishonesty or lying are made, which is what’s typically called ‘hard’ (i.e., robust, empirical) evidence. If you made narrow claims of dishonesty in more modest language, based on just the best evidence you have, and were willing to defend those claims on that basis, instead of making broad claims of dishonesty in ambiguous language based on complicated theories, they would be better received. That doesn’t mean theories of how dishonesty functions in communities, as explorations of social epistemology, shouldn’t be written. It’s just that they don’t come across as the most compelling evidence to substantiate public accusations of dishonesty.
For me it’s never been so complicated as to require involving decision theory. It’s as simple as this: some of the basic claims get inflated into much larger, more exaggerated or hyperbolic claims, and that’s a problem. The posts also assume that readers, presumably a general audience in the effective altruism or rationality communities, have prior knowledge of a bunch of things they may not be familiar with. They can only parse the claims being made by reading a series of long, dense blog posts that don’t really emphasize the thing these communities should be most concerned about.
Sometimes the claim being made is that GiveWell is being dishonest, and sometimes it’s something like: because of this, the entire effective altruism movement has been totally compromised and is incorrigibly dishonest. There is disagreement, some of it disputing how the numbers were used in the counterpoint to GiveWell, and some of it about the hyperbolic claims that appear intended to smear more people than whoever at GiveWell, or elsewhere in the EA community, is actually responsible. It appears as though people like you or Ben don’t sort through, parse, and work through these different disagreements or criticisms. It appears as though you just take all of it at face value as confirmation that the rest of the EA community doesn’t want to hear the truth, and that people worship GiveWell at the expense of any honesty, or something.
It’s also been my experience that in these discussions of complicated subjects, which appear very truncated to those unfamiliar with them, the instruction is just to go read some much larger body of writing or theory to understand why and how people are deceiving themselves, each other, and the public in the ways you’re claiming. This is said as if it’s completely reasonable to make it the responsibility of a bunch of people with criticisms of or disagreements with what you’re saying to go read tons of other content, when you are the one calling people liars, rather than your responsibility to say what you’re trying to say in a different way.
I’m not even saying that you shouldn’t publicly accuse people of being liars if you really think they’re lying. If you believe GiveWell or other actors in effective altruism have failed to change their public messaging after being correctly shown to be wrong, even by their own convictions, then just say that. It’s not necessary to claim that the entire effective altruism community is therefore also dishonest. That is especially the case for members of the EA community who disagree with you not because they dishonestly refused the facts they were confronted with, but because they disputed the claims being made, and their interlocutor refused to engage, or deflected all kinds of disagreements.
I’m sure there have been plenty of responses to criticisms of EA that were needlessly hostile. Yet reacting, and writing strings of posts, as though the whole body of responses were uniformly garbage is just not an accurate description of the responses you and Ben have received. Again, if you want to write long essays about what people’s reactions to public accusations of dishonesty imply for social epistemology, that’s fine. It would just suit most people better if that were done entirely separately from the accusations of dishonesty. If you’re publicly accusing some people of being dishonest, accuse those and only those people, very specifically. Stop tarring so many other people with such a broad brush.
I haven’t read your recent article accusing some actors in AI alignment of being liars. This dialogue seems like it is both about that and a response to other examples; I’m mostly going off those other examples. If you want to say someone is being dishonest, just say that. Substantiate it with the closest thing you have to hard or empirical evidence that some kind of dishonesty is going on. It’s not going to work to offer an idiosyncratic theory of how what someone is saying meets some technical definition of dishonesty that defies common sense. I’m very critical of a lot of things that happen in effective altruism myself. It’s just that the way you and Ben have gone about it is so poorly executed, and backfires so badly, that I don’t think there is any chance of you resolving the problems you’re trying to resolve with your typical approaches.
So I’ve given up on keeping up with the articles you write criticizing things happening in effective altruism, at least on a regular basis. Sometimes others nudge me to look at them, and I might get around to them eventually. Honestly, though, it’s at the point where the pattern I’ve learned to follow is to not be open-minded about whether the criticisms being made of effective altruism are worth taking seriously.
The problem I have isn’t with the problems being pointed out, or with different organizations being criticized for their alleged mistakes. It’s that the presentation of the problem, and of the criticism being made, is often so convoluted I can’t understand it, and that’s before I can even figure out whether I agree. I find that I’m generally more open-minded than most people in effective altruism about taking seriously criticisms of the community or related organizations. Yet I’ve learned to suspend that for the criticisms you and Ben make, for the reasons I gave, because it’s just not worth the time and effort.
This is a fictional dialogue demonstrating a meta-level point about how discourse works, and your comment is pretty off-topic. If you want to comment on my AI timelines post, do that (although you haven’t read it, so I don’t even know which of my writing you’re trying to comment on).
I think that if a given “meta-level point” has obvious ties to existing object-level discussions, then attempting to suppress the object-level points when they’re raised in response is pretty disingenuous. (What I would actually prefer is for the person making the meta-level point to be the same person pointing out the object-level connection, complete with “and here is why I feel this meta-level point is relevant to the object level”. If the original poster doesn’t do that, then it does indeed make comments on the object-level issues seem “off-topic”, a fact which ought to be laid at the feet of the original poster for not making the connection explicit, rather than at the feet of the commenter, who correctly perceived the implications.)
Now, perhaps it’s the case that your post actually had nothing to do with the conversations surrounding EA or whatever. (I find this improbable, but that’s neither here nor there.) If so, then you as a writer ought to have picked a different example, one with fewer resemblances to the ongoing discussion. (The example Jeff gave in his top-level comment, for instance, is not only clearer and more effective at conveying your “meta-level point”, but also bears significantly less resemblance to the controversy around EA.) The fact that the example you chose so obviously references existing discussions that multiple commenters pointed it out is evidence that either (a) you intended for that to happen, or (b) you really didn’t put a lot of thought into picking a good example.
I shouldn’t have to argue about the object-level political consequences of 1+4=5 in a post arguing exactly that. This is the analytic-synthetic distinction / logical uncertainty / etc.
Yes, I could have picked a better, less political example, as recommended in Politics is the Mind-Killer. In retrospect, that would have caused less confusion.
Anyway, Evan has the option of commenting on my AI timelines post, an open thread, a top-level post, a shortform, etc.
In metaphysical conflicts, people don’t win by coming up with the best evidence; they win by controlling what gets counted as evidence. By default, memeplexes gain stability by creating an environment in which evidence against them can’t be taken seriously. Arguments that EA has failed to actually measure the things it claims are worth measuring should be taken very seriously on their face, since measurement is core to its claims of moral obligation (which is itself a bad frame, but a less serious problem).