Summary: I’m aware of many examples of real debates that inspired this dialogue. In those real cases, disagreement with or criticism of public claims or accusations of lying aimed at different professional organizations in effective altruism, or in AI risk, has repeatedly been interpreted as a blanket refusal to honestly engage with the claims being made. Instead of a good-faith effort to resolve the different kinds of disputes that arise when public accusations of lying are made, the repeat accusations, and the justifications for them, are spun into long, complicated theories. Those theories don’t appear to respond at all to the content of the disagreements with the public accusations of lying and dishonesty, and that’s why the repeat accusations and their justifications are poorly received.
These complicated theories don’t have anything to do with what people actually want when public accusations of dishonesty or lying are made: what is typically called ‘hard’ (i.e., robust, empirical) evidence. If you were to make narrow claims of dishonesty in more modest language, based on just the best evidence you have, and be willing to defend the claim on that basis, instead of making broad claims of dishonesty in ambiguous language based on complicated theories, they would be received better. That doesn’t mean the theories of how dishonesty functions in communities, as an exploration of social epistemology, shouldn’t be written. It’s just that they don’t come across as the most compelling evidence to substantiate public accusations of dishonesty.
For me it’s never been so complicated as to require involving decision theory. It’s as simple as this: some of the basic claims get inflated into much larger, more exaggerated or hyperbolic claims, and that’s a problem. They also come with the expectation that readers, presumably a general audience in the effective altruism or rationality communities, have prior knowledge of a bunch of things they may not be familiar with. They will only be able to parse the claims being made by reading a series of long, dense blog posts that don’t really emphasize the thing these communities should be most concerned about.
Sometimes the claim being made is that Givewell is being dishonest, and sometimes it is something like: because of this, the entire effective altruism movement has been totally compromised and is also incorrigibly dishonest. There is disagreement, some of it disputing how the numbers were used in the counterpoint to Givewell, and some of it about the hyperbolic claims that appear intended to smear far more people than whoever at Givewell, or elsewhere in the EA community, is actually responsible. It appears as though people like you or Ben don’t sort through, parse, and work through these different disagreements or criticisms. It appears as though you just take all of that at face value as confirmation that the rest of the EA community doesn’t want to hear the truth, and that people worship Givewell at the expense of any honesty, or something.
It’s my experience, too, that in these discussions of complicated subjects, which appear very truncated to those unfamiliar with them, the instruction is just to go read some much larger body of writing or theory to understand why and how people are deceiving themselves, each other, and the public in the ways you’re claiming. This is often said as if it’s completely reasonable to claim it’s the responsibility of a bunch of people with other criticisms or disagreements to go read tons of other content when you are calling people liars, instead of you being able to say what you’re trying to say in a different way.
I’m not even saying that you shouldn’t publicly accuse people of being liars if you really think they’re lying. If you believe Givewell or other actors in effective altruism have failed to change their public messaging after having been, by their own convictions, correctly shown to be wrong, then just say that. It’s not necessary to claim that the entire effective altruism community is therefore also dishonest. That is especially the case for members of the EA community who disagree with you, not because they dishonestly refused the facts they were confronted with, but because they were disputing the claims being made while their interlocutor refused to engage with, or deflected, all kinds of disagreements.
I’m sure there are lots of responses to criticisms of EA which have been needlessly hostile. Yet reacting, and writing strings of posts, as though the whole body of responses were uniformly garbage is just not an accurate description of the responses you and Ben have received. Again, if you want to write long essays about what implications how people react to public accusations of dishonesty has for social epistemology, that’s fine. It would just suit most people better if that were done entirely separately from the accusations of dishonesty themselves. If you’re publicly accusing some people of being dishonest, accuse those and only those people, very specifically. Stop tarring so many other people with such a broad brush.
I haven’t read your recent article accusing some actors in AI alignment of being liars. This dialogue seems to be both about that and a response to other examples. I’m mostly going off those other examples. If you want to say someone is being dishonest, just say that. Substantiate it with the closest thing you have to hard or empirical evidence that some kind of dishonesty is going on. It’s not going to work with an idiosyncratic theory of how what someone is saying meets some technical definition of dishonesty that defies common sense. I’m very critical of a lot of things that happen in effective altruism myself. It’s just that the way you and Ben have gone about it is so poorly executed, and backfires so much, that I don’t think there is any chance of you resolving the problems you’re trying to resolve with your typical approaches.
So, I’ve given up on keeping up with the articles you’re writing criticizing things happening in effective altruism, at least on a regular basis. Sometimes others nudge me to look at them. I might get around to them eventually. It’s honestly at the point, though, where the pattern I’ve learned to follow is to stop being open-minded about whether the criticisms being made of effective altruism are worth taking seriously.
The problem I have isn’t the problems being pointed out, or that different organizations are being criticized for their alleged mistakes. It’s that the presentation of the problem, and of the criticism being made, is often so convoluted I can’t understand it, and that’s before I can even figure out whether I agree. I find that I am generally more open-minded than most people in effective altruism about taking seriously criticisms of the community or related organizations. Yet I’ve learned to suspend that for the criticisms you and Ben make, for the reasons I gave, because it’s just not worth the time and effort.
BTW, it might be worth separating out the case where controversial topics are being discussed vs boring everyday stuff. If you say something on a controversial topic, you are likely to get downvotes regardless of your position. “strong, consistent, vocal support” for a position which is controversial in society at large typically only happens if the forum has become an echo chamber, in my observation.
On a society-wide scale, “boring everyday stuff” is uncontroversial by definition. Conversely, articles that have a high total number of votes but a close-to-even upvote:downvote ratio are by definition controversial to at least some readers. If wrong-headed views of boring everyday stuff aren’t heavily downvoted, and are “controversial” to the point that half or more of the readers supported someone spreading supposedly universally recognizable nonsense, that’s a serious problem.
Also, regarding the EA Forum and LW at least, “controversial topics” vs. “boring everyday stuff” is a false dichotomy. These fora host all kinds of “weird” stuff by societal standards. Some popular positions on the EA Forum and LW are also controversial in society at large, but that’s normal for EA and LW. What going by societal standards doesn’t capture is which positions are or aren’t controversial on the EA Forum or LW, and why. There are heated disagreements in EA, or on LW, over debates most people outside those fora don’t care about at all. For the examples I have in mind, some of the articles were on topics that were controversial in society at large, and some were controversial only in the more limited sense of generating disagreement on the EA Forum or LW.
You make a good point I forgot to add: the function karma on an article or comment serves in providing information to other users, not just to the submitting user. That’s something people should keep in mind.
What bugs me is when people who ostensibly aspire to understand reality better let their sensitivity get in the way, and let their feelings colour their perception of how their ideas are actually being received. It seems to me this should be a basic debiasing skill people would employ if they were as serious about being effective or rational thinkers as they claim to be. If there is anything you’re suspicious of that bugs me, it’s that.
Typically, I agree with an OP who is upset about the low quality of negative comments, but I disagree with how upset they get about it. The things they say as a result are often inaccurate. For example, people will say, because of a few comments’ worth of low-quality negative feedback on a post that’s otherwise decently upvoted, that such a negative reception is typical of LW or the EA Forum. They may not be satisfied with the reception they’ve received on an article. That’s just a different claim than that their reception was extremely negative.
I don’t agree with how upset people are getting, though I do think they’re typically correct that the quality of some responses to their posts is disappointingly low. I wasn’t looking for a solution to a problem. I was asking an open-ended question to seek answers that would explain some behaviour on others’ part that doesn’t fully make sense to me. Some other answers I’ve gotten are just people speaking from their own experience, like G Gordon, and that’s fine by me too.
Some but not all academics also seek truth in terms of their own beliefs about the world, and their own processes (including hidden ones) for selecting the best model for any given decision. From a Hansonian perspective, at least, that’s what scientists and philosophers are telling themselves. Yet from a Hansonian perspective, that’s what everyone is telling themselves about their ability to seek truth, especially if a lot of their ego is bound up in ‘truth-seeking’, including rationalists. So the Hansonian argument here would appear to be a perfectly symmetrical one.
I don’t have a survey on hand for what proportion of academics seek truth both in a theoretical sense and in a more pragmatic sense, like rationalists aspire to do. Yet “academia”, considered as a population, is much larger than the rationality community, or a lot of other intellectual communities. So, even if the relative proportion of academics who could be considered a “truth-seeking community” in the eyes of rationalists is small, the absolute number of academics who would be considered part of a “genuine truth-seeking community” in those same eyes would be large enough to take seriously.
To be fair, the friends I have in mind who are more academically minded, and are critical of the rationality community and LessWrong, are also critical of much of academia as well. For them it’s more about aspiring to a greater and ever more critical intellectualism than about sticking to academic norms. Philosophy tends to be more like this than most other academic fields, because philosophy has a tradition of being the most willing to criticize the epistemic practices of other academic fields. Again, this is a primary application of philosophy. There are different branches and specializations in philosophy, like the philosophies of: physics; biology; economics; art (i.e., aesthetics); psychology; politics; morality (i.e., ethics); and more.
The practice of philosophy at its most elementary level is a practice of ‘going meta’, which is an art many rationalists seek to master. So I think truth-seekers in philosophy, and in academia more broadly, are the ones rationalists should seek to interact with more, even if finding academics like that is hard. Of course, the most common way rationalists could find such academics is to look to the academics already in the rationality community (there are plenty), and ask them whether they know other people or communities they enjoy interacting with for reasons similar to why they enjoy interacting with rationalists.
There is more I could say on the subject of how learning from philosophy, academia, and other communities in a more charitable way could benefit the rationality community. Those points are really only applicable if you either are part of an in-person/‘irl’ local rationalist community, or are intellectually and emotionally open to criticisms of, and recommendations for improvement to, the culture of the rationality community. If one or both of those conditions apply to you, I can go on.
One thing about this comment that really sticks out to me is the fact that I know several people who think LessWrong and/or the rationality community aren’t that great at truth-seeking. There are a lot of specific domains where rationalists are reported not to be particularly good at truth-seeking. Presumably, that could be excused by the fact that rationalists are generalists. However, I still know people who think the rationality community is generally bad at truth-seeking.
Those people tend to hail from philosophy. To be fair, ‘philosophy’, as a community, is one of the only other communities I can think of that is interested in truth-seeking in as generalized a way as the rationality community. You can ask the mods about it, but they’ve got some thoughts on how ‘LessWrong’ is a project strongly tied to, but distinct from, the ‘rationality community’. I’d associate truth-seeking more with LessWrong than with ‘the rationality community’, since if you ask a lot of rationalists, truth-seeking isn’t nearly all of what the community is about these days, and it isn’t even a primary draw for a lot of people.
Anyway, most philosophers don’t tend to think LessWrong is very good at seeking truth much of the time either. Again, to be fair, philosophers think lots of different kinds of people aren’t nearly as good at truth-seeking as they make themselves out to be, including all kinds of scientists. Doing that kind of thing comes with the territory of philosophy, but I digress.
The thing about ‘philosophy’, as a human community, is that, unlike the rationality community that grew out of LessWrong, it is so blended into the rest of the culture that ‘philosophers’ don’t congregate outside of academia the way ‘rationalists’ do. ‘Scientists’ seem to do that more than philosophers, but less than rationalists. Yet not everyone who wants to surround themselves with a whole community of like-minded others would want to join academia to get that. Even for rationalists who have worked in academia, truth-seeking there is more a part of the profession than something woven into the fabric of their lifestyles.
Of course, the whole point of this question was to figure out what truth-seeking communities are out there that rationalists would get along with. If rationalists aren’t perceived as good enough at truth-seeking for others to want to get along with them, which oftentimes appears to be the case, I don’t know what a rationalist should do about that. Of course, you didn’t mention truth-seeking, and I mentioned there are plenty of things rationalists are interested in other than truth-seeking. So, the solution I would suggest is for rationalists to route around that, and see if they can’t get along with people who share something in common with them, and appreciate something about them, other than truth-seeking.
Hi Dayne. I’d like to join the Facebook group. How do I join?
The first thing I would look at to solve this problem is the cultural gaps between the rationality community and adjacent communities, especially based on how they interact in person, like effective altruism, startup culture, transhumanism, etc.
One thing I find interesting, as an example that may be particularly pertinent to some rationalists, is how effective altruism has, in spite of everything else, been robust to the kinds of schisms you’re talking about. In spite of all the differences between factions of EA, it remains a grand coalition/alliance (of a sort). Each of the following subgroups of EA, usually built around a specific, preferred cause, has at least a few hundred if not a couple thousand adherents in EA, and I expect each would be able to command millions of dollars in donations to their preferred charities each year:
high-impact/evidence-based global poverty alleviation (aka global health and development)
existential risk reduction (inclusive of AI risk as a distinct and primary subgroup, but focused on other potential x-risks as well)
effective animal advocacy (focused on farm animal welfare)
reducing wild animal suffering (focused on wild animal welfare)
While none of these subgroups is wholly within EA, it’s very possible the majority of members of each of these communities also identifies as part of the EA community. An easy explanation is that everyone is sticking around for the Open Phil bucks, or the chance of receiving Open Phil bucks in the future, since a cause area’s increased prominence in EA is moderately-to-highly correlated with it receiving ≥ $10^7/year within a few years, when before the area’s annual funding was probably ≤ $10^5. Yet there isn’t a guarantee, and the barriers to accessing these resources have been such that I’ve seen multiple of these subgroups openly and seriously consider splitting with EA, on the grounds that one or more of them might do better by investing its own resources into growing outside of EA and securing its independence. However, as far as I can tell, there has never been a single, whole cause area of EA that has ‘exited’ the community. As the movement has existed for ~10 years, it seems unlikely this would be the case if there weren’t other factors contributing to the cohesion of such otherwise disparate groups.
I was thinking about something similar the other day. I was wondering if, from a historical perspective, it would be valid to look at not just specific sects, but all Abrahamic religions, as ‘schisms’ from the original Judaism. One complication is that religious studies scholars and historians may see the transformation of one sect into an unambiguously distinct religion as more of an ‘evolution’, like speciation in biology, than a ‘schism’ as we typically think of them in human societies.
The one thing I think this post is most missing, if it’s primarily aimed at rationalists, is how introverted rationalists can go about making new friends. I’ve met a lot of people drawn to the rationality community because they don’t otherwise know how to join a group of people to befriend who they also have enough in common with that they would want to befriend them. I’m not saying this isn’t a good article (I strongly upvoted it, based alone on how important I think this signal/message is), nor that I know the best way to write about “how to make more friends (outside the rationality community)”. I’m just saying that if you have it in you, I think that would also be a post worth writing.
“Making new friends” or “Joining a New Group of Friends” or “Joining a New Community” might seem so obvious that it doesn’t merit writing up how rationalists can do it. Yet, again, I’ve met rationalists who, before they joined the rationality community, thought themselves so unable to make new friends in adulthood that they consider themselves lucky to even have fallen ass-backwards into the rationality community.
I think this is an interesting question. I know some friends on social media who know a lot more about philosophy than I do. A lot of people who aren’t as well-read in philosophy only come across notions of what the ontology of things like logic and mathematics might be through Platonism. I’m not that familiar with the alternatives myself, yet I’m aware there is a much wider variety of options for what constitutes the ontology of logic to explore beyond Platonism. I haven’t read Plato, so I couldn’t rightly say what, if anything, is wrong with his view. Yet learning about things like the ontology of logic has led me to think that more recent and obscure options for explaining such things are better than the Platonic realm. I don’t think they’re the kind of views someone with only a cursory understanding of academic philosophy would have heard of. I’ve been saying ‘things like the ontology of logic’ because I’ve actually thought more specifically about the ontology of mathematics. I’ve also talked to some friends who know much more than me about maths, logic, and philosophy. I would suggest looking into the following fields for a much greener and greater garden of potential answers to your question:
Philosophy of Mathematics
Philosophy of Mind
Philosophy of Logic
I will also ask my friends what answers they would give to this question, and then I will report them back here.
So, I’ve read the two posts on Benquo’s blog you’ve linked to. The first one, “Bad Intent Is a Disposition, Not a Feeling”, depended on his claim that mens rea is not a real thing. As was pointed out in the comments, and as he himself acknowledged, those comments made some good points that would cause him to rethink the theme he was trying to impart with the original post. I looked up both the title of that post and ‘mens rea’ on his blog to see if he had posted any updated thoughts on the subject. There were no results on either topic from the date of publication of that post onward, so it doesn’t appear he has publicly updated his thoughts. That was over 2 years ago.
The second post on the topic was more abstract and figurative, and was using some analogy and metaphor to get its conclusion across. So, I didn’t totally understand the relevance of all that in the second post to the first post, even though the second was intended as a sequel to the first. It seemed to me the crux of resolving the problem was:
Sadly, being honest about your sense that someone else is arguing in bad faith is Officially Not OK. It is read as a grave and inappropriate attack. And as long as that is the case, he could reasonably expect that bringing it up would lead to getting yelled at by everyone and losing the interaction. So maybe he felt and feels like he has no good options here.
Benquo’s conclusion is that, for public discourse and social epistemology, at least in his experience, being honest about your sense that someone else is arguing in bad faith is Officially Not OK, because it is always construed as a grave and inappropriate personal attack, so resolving the issue appears socially or practically impossible. My experience is that this just isn’t the case. Being honest in that way can lend itself to better modes of public discourse. For one thing, it can move communities to states of discourse much different from where the EA and rationality communities currently are. One problem is I’m not sure even those rationalists and EAs who are aware of such problems would prefer the options available, which would be just hopping onto different platforms with very different discourse norms. I would think that would be the most practical option, since the other viable alternative would be for these communities to adopt other communities’ discourse norms, and replace their own with them wholesale. That seems extremely unlikely to happen.
Part of the problem is that Benquo seems to construe ‘bad faith’ with an overly reductionistic definition. This is what was fleshed out in the comments on the original post on his blog, by commenters AGB and Res. So, that makes it hard for me to accept the frame Benquo bases his eventual conclusions on. Another problem for me is that the inferential distance gaps between myself, Benquo, and the EA and rationality communities, respectively, are so large now that it would take a lot of effort to write them up and explain them all. Since it isn’t a super high priority for me, I’m not sure that I will get around to it. However, there is enough material in Benquo’s posts, and the discussion in the comments, for me to work with in explaining some of what I think is wrong with how he construes bad faith in these posts. If I write something like that up, I will post it on LW.
I don’t know if the EA community in large part disagrees with the OP for the same reasons I do. I think based off some of the material I have been provided with in the comments here, I have more to work with to find the cruxes of disagreement I have with how some people are thinking, whether critically or not, about the EA and rationality communities.
This is a post I, funnily, found both useful and, in other comments, intend to ‘tear to shreds’, so to speak. The first thing I would say is that this article could be edited and formatted better. This is a relatively long post for LW that nonetheless covers a great breadth of material rather briefly relative to the scope of the topics. I think having an introduction that summarizes the different sections of your post would be helpful for readers. You could also use the formatting options available on LW, such as subheadings and the options for presenting formal logic or philosophy, to make this article more readable on this site. I’d also say that you move through a lot of subjects so fast that it would be unrealistic to expect most readers to know enough about them to put them all together in the way you intend, so as to understand your conclusion. It would help if you were to provide some links as resources to learn more about the subjects, or were to expand on how the central theme(s) of this article relate to the different topics you bring up (e.g., theoretical physics, quantum computing, AI, Bayesian epistemology). I think editing this article to make it more readable is what would get more people to read it to the end, and thus understand the message you’re trying to impart.
Have you looked at cognitive science before? I haven’t looked at it extremely deeply, but I think it can offer routes to empirically based insights into how the human mind-brain operates. However, unifying the insights cognitive science can offer with human consciousness and other difficult issues, like a longing for meaning, is a whole other set of very hard problems.
I haven’t read a lot about it, but this seems related to a kind of problem in philosophy that I know as ‘grounding problems’, e.g., the question of ‘how do we ground truth?’ On Wikipedia, the article I found describing it calls it the symbol grounding problem. On the Stanford Encyclopedia of Philosophy, this kind of problem is known as a problem of metaphysical grounding. For rationalists, one application of the question of metaphysical grounding is to what makes propositions true. That constitutes my reading on the subject, but those links should provide further resources. Anyway, the connection between the question of how to ground knowledge and this post is that if knowledge can’t be grounded, it seems by default it can only be circularly justified. Another way to describe this issue is as the proposition that all worldviews entail some kind of dogma to justify their own knowledge claims.
When he wrote:
Claims to the contrary are either obvious nonsense, or marketing copy by the same people who brought you the obvious nonsense.
In most contexts where language like this is used, it’s usually pretty clear that you are implying someone is doing something closer to deliberately lying than to some softer kind of deception. I am aware Ben might have some model on which Givewell or others in EA are acting in bad faith in some other manner, involving self-deception. If that is what he is implying Givewell or Good Ventures are doing instead of deliberately lying, it isn’t clear from the OP. He could also have stated that the organizations in question are not fully aware they’re just marketing obvious nonsense, and had been immune to his attempts to point this out to them. If that is the case, he didn’t state it in the OP either.
So, based on their prior experience, I believe it would appear to many people like he was implying Givewell, Good Ventures, and EA are deliberately lying. Deliberate lying is generally seen as a bad thing, so to imply someone is deliberately lying seems clearly to be an attribution of bad motives to others. If Ben didn’t expect or think that is how people would construe part of what he was trying to say, I don’t know what he was going for.
I will take a look at them. Thanks.