The Motivated Reasoning Critique of Effective Altruism

I sketch out a plausible critique of effective altruism based on priors about how commonplace motivated reasoning is in the world at large, on those priors updated for specific features of the effective altruism community, and on some empirical evidence of observed motivated reasoning in our community (keeping in mind that this evidence is selection biased both by my desire to find it and by how hard it is to detect motivated reasoning while you’re in the middle of a situation).

I recommend keeping comments in the EA Forum if possible, to centralize the conversation.

___

Epistemic status: Half-baked at best

I have often been skeptical of the value of a) critiques against effective altruism and b) fully general arguments that seem like they can apply to almost anything. However, as I am also a staunch defender of hypocrisy, I will now hypocritically attempt to make the case for applying a fully general critique to effective altruism.

In this post, I will claim that:

  1. Motivated reasoning inhibits our ability to acquire knowledge and form reasoned opinions.

  2. Selection bias in who makes which arguments significantly exacerbates the problem of motivated reasoning.

  3. Effective altruism should not be assumed to be above these biases. Moreover, there are strong reasons to believe that incentive structures and institutions in effective altruism exacerbate rather than alleviate these biases.

  4. Observed data and experiences in effective altruism support this theory; they are consistent with an environment where motivated reasoning and selection biases are rampant.

  5. To the extent that these biases (related to motivated reasoning) are real, we should expect the harm done to our ability to form reasoned opinions to also seriously harm the project of doing good.

I will use the example of cost-effectiveness analyses as a springboard for this argument. (I understand that effective altruism, especially outside of global health and development, has largely moved away from explicit expected value calculations and cost-effectiveness analyses. However, I do not believe this change invalidates my argument; see Appendix B.)

I also list a number of tentative ways to counteract motivated reasoning and selection bias in effective altruism:

  1. Encourage and train scientific/​general skepticism in EA newcomers.

  2. Try marginally harder to accept newcomers, particularly altruistically motivated ones with extremely high epistemic standards.

  3. As a community, fund and socially support external (critical) cost-effectiveness analyses and impact assessments of EA orgs.

  4. Within EA orgs, encourage and reward dissent of various forms.

  5. Commit to individual rationality and attempts to reduce motivated reasoning.

  6. Maybe encourage a greater number of people to apply to and seriously consider jobs outside of EA or EA-adjacent orgs.

  7. Maintain or improve the current culture of relatively open, frequent, and vigorous debate.

  8. Foster a bias towards having open, public discussions of important concepts, strategies, and intellectual advances.

Motivated reasoning: What it is, why it’s common, why it matters

By motivated reasoning, I roughly mean what Julia Galef calls “soldier mindset” (H/​T Rob Bensinger):

In directionally motivated reasoning, often shortened to “motivated reasoning”, we disproportionately put our effort into finding evidence/​reasons that support what we wish were true.

Or, from Wikipedia:

emotionally biased reasoning to produce justifications or make decisions that are most desired rather than those that accurately reflect the evidence

I think motivated reasoning is really common in our world. As I said in a recent comment:

My impression is that my interactions with approximately every entity that perceives themself as directly doing good outside of EA* is that they are not seeking truth, and this systematically corrupts them in important ways. Non-random examples that come to mind include public health (on covid, vaping, nutrition), bioethics, social psychology, developmental econ, climate change, vegan advocacy, religion, US Democratic party, and diversity/​inclusion. Moreover, these problems aren’t limited to particular institutions: these problems are instantiated in academia, activist groups, media, regulatory groups and “mission-oriented” companies.

What does motivated reasoning look like in practice? In the field of cost-effectiveness analyses, it might look like this comment on a blog post about scientific conflicts of interest:

Back in the 90’s I did some consulting work for a startup that was developing a new medical device. They were honest people–they never pressured me. My contract stipulated that I did not have to submit my publications to them for prior review. But they paid me handsomely, wined and dined me, and gave me travel opportunities to nice places. About a decade after that relationship came to an end, amicably, I had occasion to review the article I had published about the work I did for them. It was a cost-effectiveness analysis. Cost-effectiveness analyses have highly ramified gardens of forking paths that biomedical and clinical researchers cannot even begin to imagine. I saw that at virtually every decision point in designing the study and in estimating parameters, I had shaded things in favor of the device. Not by a large amount in any case, but slightly at almost every opportunity. The result was that my “base case analysis” was, in reality, something more like a “best case” analysis. Peer review did not discover any of this during the publication process, because each individual estimate was reasonable. When I wrote the paper, I was not in the least bit aware that I was doing this; I truly thought I was being “objective.”

Importantly, motivated reasoning is often subtle and insidious. In the startup consultant’s case above, any given choice or estimate in the cost-effectiveness analysis seems reasonable, but it is very improbable that unbiased choices would all lean the same way (“at virtually every decision point in designing the study and in estimating parameters, I had shaded things in favor of the device”).
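A minimal simulation sketch of how this compounding works (the number of parameters and the size of each shading are invented for illustration, not taken from the consultant’s actual model): if a cost-effectiveness estimate multiplies together many parameters, and each one is nudged only slightly in the favorable direction, the final figure drifts far from the unbiased estimate even though every individual nudge looks defensible.

```python
import numpy as np

rng = np.random.default_rng(0)
n_params, n_trials = 12, 10_000   # assumption: a CEA built from 12 multiplicative parameters

# Honest analyst: each parameter estimate is noisy but centered on the truth (log-scale noise).
honest = np.exp(rng.normal(0.0, 0.10, size=(n_trials, n_params))).prod(axis=1)

# Motivated analyst: the same noise, plus a small +5% shading on every parameter.
shaded = np.exp(rng.normal(np.log(1.05), 0.10, size=(n_trials, n_params))).prod(axis=1)

print(f"median honest estimate (truth = 1.0): {np.median(honest):.2f}")
print(f"median shaded estimate:               {np.median(shaded):.2f}")
# Each 5% nudge is individually defensible, but twelve of them compound to roughly 1.8x.
```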

You might think that you’re immune to such biases, or at least not very affected by them, as an EA who wants to do good and cares a lot about the truth, as a person who thinks hard about reasoning, or even as someone who has read Scout Mindset and/or The Sequences and/or other texts that warn of the harms of motivated reasoning. But I think this view is too cavalier and misses the point.

Again, motivated reasoning often doesn’t look like motivated reasoning externally, and it certainly doesn’t feel like motivated reasoning from the inside. To slightly misquote the character Wanda from Bojack Horseman:

You know, it’s funny, when you look at {an EA org, your own research} with rose-colored glasses, all the red flags just look like flags.

Photo credit: Bojack Horseman on Netflix

Selection bias in who makes which arguments significantly exacerbates the problem of motivated reasoning

The problem of motivated reasoning doesn’t just stop at the individual level. At a collective level, even if your own prior beliefs are untainted by motivated reasoning (e.g. because you don’t care about the results at all), your information environment is adversarially selected by who holds which opinions and who chooses to voice them.

For example, I was chatting with a friend who works in cryptocurrency trading, and he pointed out that the business propositions of pretty much all of the startups joining this space only really make sense if you think bitcoin (BTC) will go up by >5x (or at least assign sufficiently high probability to BTC going up by ~5x or more). Thus, even if you think everybody individually has unbiased estimates of the value of BTC (a big “if”!), nonetheless, the selection of people working in this space will basically only include people who are very optimistic (relative to otherwise identical peers) about the future of bitcoin.

Photo credit: Lizka

Similarly, studies about medical interventions or social psych will be selection biased by being more likely to be conducted by people who believe in them (“experimenter effects”), analyses of climate change (or other cause areas) will be selectively conducted by people who think climate change (or the other cause area) is unusually important and tractable, etc.

Note that the problem is not just an issue of who holds which beliefs but also of who chooses to voice them. Suppose for the sake of the argument that 100 smart people initially have unbiased (but noisy) priors about whether cryptocurrency is valuable. If our beliefs about cryptocurrency were formed by an unweighted poll, we might hope to take advantage of crowd wisdom and get, if not true, then at least unbiased beliefs about cryptocurrency[1]. But instead, the only beliefs you’re likely to hear are from true believers (and a few curmudgeons with their own idiosyncratic biases), which sharply biases your views (unless you search very carefully).

Photo credit: Lizka
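Here is a minimal sketch of this selection effect (all numbers invented for illustration): compare an unweighted poll of all 100 estimators with the subset of voices enthusiastic enough to actually work in, or loudly advocate for, the space.

```python
import numpy as np

rng = np.random.default_rng(1)
true_value = 1.0                                    # assumption: the asset/intervention's true value
estimates = rng.normal(true_value, 0.5, size=100)   # 100 unbiased but noisy individual estimates

full_poll = estimates.mean()            # what an unweighted poll of everyone would report
voiced = estimates[estimates > 1.5]     # only the most optimistic people join the space and get heard

print(f"average of all 100 estimates:   {full_poll:.2f}")
print(f"average of the voices you hear: {voiced.mean():.2f}")
# The poll lands near the true value of 1.0; the self-selected voices report something
# noticeably higher, even though no individual estimate was biased.
```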

Similarly, consider again the medical device startup consultant’s case above. Suppose we’re trying to decide whether the medical device is cost-effective and we read 5 different cost-effectiveness analyses (CEAs). Then “the process works” if many people have different biases (including but not limited to motivated reasoning) but these biases are uncorrelated with each other. But this is probably not what happens. Instead, we are much more likely to read studies conducted by people who are motivated (whether by funding or by ideology) and who hold sharp and unusual prior beliefs about the effectiveness of such a device.

Aside on how “enemy action” can exacerbate perfectly innocent selection bias.

Suppose there are “innocent” reasoners for specific questions, that is, people who are not ideologically or otherwise motivated by the question at hand, and who independently come up with unbiased (but high variance) analyses of a given issue. In a naive epistemic environment, we’ll hear all of these analyses (or a random selection of them) and our collective epistemic picture will be ideologically unbiased (though it can of course still be wrong because of variance or other issues).

But our epistemic environment is often not naive. Instead, it’s selection biased by funding that makes certain more profitable opinions more public (as with the startup CEA example above), by publication norms that make surprising and/or ideologically soothing “discoveries” more likely to be published, by media reporting, by hiring (including tenure) in academia and think tanks, and so forth.

A toy model to ponder: a situation where money tends to increase political success and all political donations are anonymous. I claim that even if the politicians do not do anything untoward, it is concerning enough merely that a) there are initially differing opinions on a range of political issues and b) money differentially helps certain candidates succeed or have their voices amplified. These factors combined would effectively result in regulatory capture, without any specific individual doing anything obviously wrong.
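A minimal simulation sketch of this toy model (every parameter is invented for illustration): candidates start with a spread of positions, anonymous money quietly amplifies the candidates whose positions favor the funders, and the positions that end up winning are systematically shifted even though nobody did anything obviously wrong.

```python
import numpy as np

rng = np.random.default_rng(2)
n_candidates = 200
positions = rng.normal(0.0, 1.0, n_candidates)    # 0 = neutral; positive = donor-friendly position

base_appeal = rng.normal(0.0, 1.0, n_candidates)  # electoral appeal unrelated to the issue
funding = np.clip(positions, 0, None)             # anonymous donors fund candidates whose positions favor them
success = base_appeal + 0.8 * funding             # money amplifies a candidate's chance of success

winners = positions[np.argsort(success)[-20:]]    # the 20 most successful candidates
print(f"average position of all candidates: {positions.mean():+.2f}")
print(f"average position of the winners:    {winners.mean():+.2f}")
# No candidate changed their views and no donor asked for anything in return,
# yet the winning positions tilt towards the funders.
```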

Effective altruism should not be assumed to be above these biases

The following sections pertain mainly to a) longtermism, b) community building, c) prioritization of new interventions and/​or causes, and to a lesser extent, d) animal welfare. I have not recently followed the EA global poverty space enough to weigh in on that front, but I would guess these biases apply (to a lesser degree) there as well. Unfortunately, I am of course not an expert on a)-d) either.

Perhaps you might consider these biases (motivated reasoning and argument-generation selection bias) an unfortunate state of affairs for the world at large, but not a major problem for effective altruism. I think the strongest argument against that perspective is to consider that, before you look at the data (next section), our broad prior should very strongly be that these are large issues. Further, if we take into account specific features of effective altruism, we should probably become more worried, not less.

First, consider what exacerbates motivated reasoning. If I were to hypothesize a list of criteria for common features of directionally motivated reasoning, especially within communities, I’d probably include features like:

  1. Strong ideological reasons to believe in a pre-existing answer before searching further (consider mathematical modeling of climate change or coronavirus lockdowns vs pure mathematics)

  2. Poor/​infrequent feedback loops and low incentives to arrive at the truth (consider PR/​brand consulting vs sales[2])

  3. A heavily insular group without much contact with sufficiently high-status outsiders who have dissenting opinions (consider a presidential cabinet vs a parliamentary deliberative body)

Unfortunately, effective altruism is on the wrong side of all these criteria.

At this point, astute readers may have noticed that my list is itself not balanced. I did not include features that look favorable for effective altruism, for example our culture of relatively open, frequent, and vigorous debate. However, at least among the important and obvious features I can easily generate after a quick 10-20 minutes of introspection, I think the balance of features makes EA look worse rather than better. Readers may be interested in generating their own lists and considering this situation for themselves.

We now consider selection bias. I think it is relatively uncontroversial that the current composition of EA suffers from selection effects, and this has been true since the very beginning (possible search terms include “EA monoculture” and “diversity and inclusion in effective altruism”). The empirics of the situation are rarely debated. Instead, there is a robust secondary literature on whether and to what degree specific axes of diversity (e.g. talent, opinion, experience, appearance) are problems, and whether and to what degree specific proposed solutions are useful.

I will not venture a position on the overall debate here. However, I will note that selection effects in EA’s composition are not necessarily much evidence for selection bias in EA’s conclusions. For example, if you were to learn that almost all EA organizational leaders have the same astrological sign, you should not then make a strong update towards EA organizations’ cause prioritization being heavily selection biased, as horoscopes and birthdays are not known to be related to cause prioritization. To argue that selection effects introduce important biases into our conclusions, we should probably believe that these effects are upstream of differing conclusions. [3]

I will venture a specific selection effect to consider: most prominent arguments we hear in EA are made by people who work in EA or EA-aligned orgs, or people in our close orbit. For example, consider the question of the value of working in EA orgs. In addition to the usual issues of motivated reasoning (people would like to believe that their work and that of their friends is important), there are heavy selection biases in who chooses to work in EA orgs. Akin to the cryptocurrency example above, EA orgs are primarily staffed by true believers in EA org work! For example, the largest and loudest purveyor of EA career advice is staffed by people who work in an EA org, and unsurprisingly comes to the conclusion that work in an EA organization is very impactful.

(I find this issue a hard one to consider, as I inside-view strongly buy that much EA org work is quite valuable, mostly through what I perceive to be an independent assessment, and I have said things to that effect. Nonetheless, it would be dishonest for our community not to collectively acknowledge this significant selection bias in who makes which arguments, and how often they make them.)

We cannot rule out motivated reasoning and selection bias being common in EA

Theory aside, should we be worried about these biases in practice? That is, does the data confirm that these biases are common and pernicious?

Unfortunately, to give a full treatment of this issue, one would need to take a careful, balanced, and comprehensive look at the data for all (or a representative sample of) the cost-effectiveness analyses or other arguments in EA. Due to time constraints, I am far from able to give a fully justified treatment here. Instead, I will argue a much weaker claim: that the limited data I’ve looked at so far is consistent with a world where motivated reasoning and selection bias are common in EA arguments in practice. In Bayesian terms, I’m trying to answer P(Evidence|Hypothesis) and not P(Hypothesis|Evidence).

Recall again the definition of motivated reasoning:

reasoning to produce justifications or decisions that are most desired rather than those that accurately reflect the evidence

In worlds where motivated reasoning is commonplace, we’d expect to see:

  1. Red-teaming will discover errors that systematically slant towards an organization’s desired conclusion.

  2. Deeper, more careful reanalysis of cost-effectiveness or impact analyses usually points towards lower rather than higher impact.

In other words, error alone is not evidence for motivated reasoning. Motivated reasoning (especially frequent motivated reasoning) instead implies that initial estimates are biased (in the statistical sense).
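A minimal sketch of what this distinction looks like in data (the error sizes are invented): if published estimates are merely noisy, careful reanalysis revises them upward about as often as downward; if motivated reasoning is common, the revisions point systematically downward.

```python
import numpy as np

rng = np.random.default_rng(3)
true_impact = np.ones(50)                                      # 50 hypothetical programs, true impact = 1.0

noise_only = true_impact * np.exp(rng.normal(0.0, 0.3, 50))    # honest-but-noisy published estimates
motivated = true_impact * np.exp(rng.normal(0.4, 0.3, 50))     # noisy and systematically inflated

# Assume a careful re-analysis recovers something close to the true impact in both worlds.
print("share of published estimates that a re-analysis would revise downward:")
print(f"  noise-only world:          {np.mean(noise_only > true_impact):.0%}")
print(f"  motivated-reasoning world: {np.mean(motivated > true_impact):.0%}")
# Roughly 50% vs 90%: the telltale sign is not error per se, but revisions
# that consistently point downward.
```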

Let’s consider a few cause areas in EA to see whether the data is consistent with the motivated reasoning hypothesis.

Meta-EA:

EA orgs, including my own (Rethink Priorities), frequently do internal cost-effectiveness analyses (CEAs) or looser and more qualitative “impact assessments.” I don’t think I’ve read any of them in careful detail, so I don’t have definitive evidence of motivated reasoning, but the following seems consistent with a motivated reasoning world:

  1. To the best of my knowledge, internal CEAs rarely if ever turn up negative. I.e., people almost never say after evaluation that the org’s work isn’t worth the money or staff time.

    1. (Un?)fortunately, the existing evidence is also consistent with a world where EA orgs do end up doing unusually impactful work. However, the observed evidence does not preclude heavy motivated reasoning, at least without a much more careful look.

  2. There are few if any careful and public evaluations of meta-EA work a) in general or b) by people not connected via funding or social connections to the specific EA orgs being evaluated.

Some more loose evidence (note that I have not read their impact assessments carefully, and that at least in Giving What We Can’s case, the organization’s direction and leadership have substantially changed since the quoted impact assessment was made):

  • 80,000 Hours was somewhat credulous in their initial evaluation of the expected counterfactual strength of career changes.

Ajeya Cotra – a senior research analyst at Open Philanthropy – followed up with some people who made some of the top plan changes mentioned in our 2018 review, and found that when asked more detailed questions about the counterfactual (what would have happened without 80,000 Hours), some of them reported a significantly smaller role for 80,000 Hours than what we claimed in our evaluation.

  • The median case of Giving What We Can’s (2015) “realistic impact calculation” has three large issues:

    • The impact of future donations is time-discounted using the UK Green Book rate (3.5%), but

      • EAs can’t borrow at anywhere near 3.5%,

      • we can get higher expected returns from the stock market, especially with leverage, and

      • around the same time the impact calculation was made (2015), EA orgs were estimating (arguably correctly, in retrospect) a ~10-20% implicit discount rate (from MIRI, but I’ve seen similar numbers from other orgs) for donating to them now vs. later.

      • So overall, 3.5% seems like a suspiciously low discount rate for donations in 2015 (see the sketch below).

    • The impact of GWWC is discounted by a) the counterfactual impact of people who might have donated even without the pledge and b) the raw attrition rate.

      • But the two interface poorly: the raw attrition rate is likely an underestimate of the marginal attrition rate of people persuaded to donate by Giving What We Can!

    • Giving What We Can estimates an annual attrition rate of 5%, which in retrospect is overly optimistic.

(H/T Alexander Gordon-Brown for both examples; some details are filled in by me.)
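As a rough illustration of why the discount rate matters so much here (a sketch with made-up donation numbers, not GWWC’s actual model): discounting a multi-decade pledge at 3.5% versus the ~10-20% implicit rates EA orgs were quoting at the time changes the estimated present value severalfold.

```python
# Present value of a hypothetical pledge: $5,000/year for 40 years, under
# different annual discount rates (all numbers illustrative, not GWWC's model).
def present_value(annual_donation, years, rate):
    return sum(annual_donation / (1 + rate) ** t for t in range(1, years + 1))

for rate in (0.035, 0.10, 0.20):
    pv = present_value(5_000, 40, rate)
    print(f"discount rate {rate:>5.1%}: present value ≈ ${pv:,.0f}")
# At 3.5% the pledge is worth ~$107k; at 10-20% it is worth ~$25k-$49k,
# so the discount rate alone moves the bottom line by a factor of 2-4.
```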

More broadly, I’m moderately concerned that insufficient attention is paid to people’s likely counterfactuals. In (meta-)EA, it is often implicitly assumed that career plan/donation changes are either positive or neutral as long as the changes are a) broadly consistent with EA and b) carefully considered.

I agree that this might be what we think in expectation, but I think reality has a lot of noise, and we should be at most 75% or so confident that meta-org-inspired changes for any given individual are actually positive, which cuts the expected impact of these orgs by another factor of 2 or so (made-up numbers).
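To make the arithmetic behind that “factor of 2” explicit (using the same made-up numbers): if a plan change helps with probability 0.75 and hurts about as much as it would have helped with probability 0.25, the expected value is half of the naive estimate that treats every change as positive.

```python
# Expected value of a plan change under the made-up 75%/25% assumption,
# with symmetric magnitudes: +1 unit if it helps, -1 unit if it hurts.
p_positive = 0.75
expected_value = p_positive * (+1) + (1 - p_positive) * (-1)
print(expected_value)  # 0.5, i.e. half the naive estimate that every change is worth +1
```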

cf KnowYourMeme

Animals

When I look at more recent criticisms of Animal Charity Evaluators (ACE)’s cost-effectiveness analyses (e.g., Halstead (2018)), I think motivated reasoning is a very plausible explanation. In particular, the observed data (errors biased much more towards higher rather than lower estimates) is consistent with a world where ACE researchers really wanted animal charities to have a very high impact. Now, this was written in 2018, and hopefully ACE has improved since then, so it feels unfair to penalize ACE too much for past mistakes. Nonetheless, from a purely forecasting or Bayesian perspective, the past is a good and mostly unbiased predictor of the future, so we should not assume that ACE has improved a lot in research quality before we get sufficient evidence to that effect.

Similarly, the Good Food Institute (GFI), an alternative proteins research/advocacy org, recently (2021) funded a mission-aligned consultancy to do what’s called a “techno-economic analysis (TEA)” of the feasibility of mass-produced cultured meat, which attracted a ton of external attention (e.g., 40k karma on reddit). The result unsurprisingly came out much more positive than an earlier, more careful analysis from a more skeptical source directly funded by Open Phil. This seems consistent with a story of both motivated reasoning and selection bias.

Longtermism:

I haven’t done an exhaustive search, but I’m not aware of many cost-effectiveness analyses in this space (despite nominally working in this area). One of the few commendable exceptions I’m aware of is Alliance to Feed the Earth in Disasters (ALLFED)’s own cost-effectiveness analysis.

A moderately rigorous review by an unconnected third party resulted in noticeably lower numbers for cost-effectiveness. This appears consistent with a story of motivated reasoning, where “Deeper, more careful reanalysis of cost-effectiveness or impact analyses usually points towards lower rather than higher impact.”

This subsection may seem shorter and less damning than the other subsections, but I note this is more due to lack of data than to active evidence against motivated reasoning. I consider the lack of cost-effectiveness analyses a bug, not a feature, as I will discuss in Appendix B.

New causes:

When Giving Green launched, it very quickly became an apparent EA darling. People were linking to the post a lot, some people were making donations according to Giving Green’s recommendations, and peripheral EA orgs like High Impact Athletes were recommending Giving Green. Giving Green even got positive reception in the Atlantic and the EA-aligned Vox vertical Future Perfect.

As far as I can tell, Giving Green’s research quality was mediocre at best. As this critique puts it:

What this boils down to is that in every case where I investigated an original recommendation made by Giving Green, I was concerned by the analysis to the point where I could not agree with the recommendation.

(NB: I did not independently evaluate the original sources.)

Note that an org starting out and making mistakes while finding its feet is not itself a large issue. I’m a big fan of experimentation and trying new and hard things! But the EA community’s initially uncritical acceptance is suspicious: I do not believe this error is random.

Neither the existence of errors nor the credulity of accepting errors at face value is itself strong evidence of motivated reasoning. For the motivated reasoning case to stick, we need to believe in, e.g., personal or ideological biases towards wanting to believe the results of certain analyses, which make you less likely to check your work when the “results” agree with your predetermined conclusions (cf. motivated stopping, isolated demands for rigor); or, on a community level, the same thing happening where the group epistemic environment is more conducive to believing new arguments or evidence that are ideologically favorable or otherwise palatable (e.g. having a climate change charity to donate to that doesn’t sound “weird”).

Unfortunately, I believe both things (motivated reasoning in the research conclusions and in the community’s easy acceptance of them) have happened here.

(An aside: I’m out-of-the-loop enough that I don’t have direct evidence of these problems (motivated reasoning and selection bias) being significant in the global health and development space. However, I note that when some development economists venture out to do something new in climate change, these problems immediately rear their heads. This is to me moderate evidence for motivated reasoning and selection bias also being rampant in that cause area. I also think there is motivated reasoning in human neartermism’s marketing/PR (e.g. here). Motivated reasoning in marketing/PR is not itself proof of bad research or low epistemic quality, but it is indicative. So I’d be quite surprised if the global health and development space were immune to such worries.)

Why motivated reasoning and selection bias are bad for effective altruism

Truth is really important to the project of doing lots and lots of impartial good, and motivated reasoning harms truth. For example, see this post by Stefan Schubert about why truth-seeking is especially important for utilitarianism:

Once we turn to application, truth-seeking looms large. Unlike many other ethical theories, and unlike common-sense morality, utilitarianism requires you maximise positive impact. It requires you to advance the well-being of all as effectively as possible. How to do that best is a complex empirical question. You need to compare actions and causes which are fundamentally different. Investments in education need to be compared with malaria prevention. Voting reform with climate change mitigation. Prioritising between them is a daunting task.

And it’s made even harder by utilitarian impartiality. It’s harder to estimate distant impact than to estimate impact on those close to us. So the utilitarian view that distance is ethically irrelevant makes it even more epistemically challenging. That’s particularly true of temporal impartiality. Estimating the long-run impact of our present actions presents great difficulties.

So utilitarianism entails that we do extensive research, to find out how to maximise well-being. But it’s not enough that we put in the hours. We also need to be guided by the right spirit. There are countless biases that impede our research. We fall in love with our pet hypotheses. We refuse to change our mind. We fail to challenge the conventional wisdom of the day. We’re vain, and we’re stubborn. To counter those tendencies, utilitarians need a spirit of honest truth-seeking.

I believe a similar argument can be made for effective altruism.

That said, I want to be careful here. It’s theoretically consistent for you to believe a) truth is really important to EA and b) motivated reasoning is harmful for truth but c) motivated reasoning isn’t a big deal for the EA project.

However, I do not personally believe the possible reasons for this discrepancy, whether alone or in aggregate, are sufficiently strong. I briefly sketch out my reasoning in Appendix A.

Tentative ways to counteract motivated reasoning and selection biases in effective altruism

I’m not very confident that I’ve identified the right problem, and even less confident that I could both identify the right problem and come up with the right solutions to it. (There’s a graveyard of failed attempts to solve perceived systematic problems in EA, and I do not view myself as unusually special). Nevertheless, here are a few attempts:

  1. Encourage and train scientific/general skepticism in EA newcomers. Every year, we have an influx of newcomers, some of whom are committed and care about the same things we do, and some of whom are very good at reasoning, but who are on average not yet captured by the same biases and sunk costs that plague veteran EAs. If we introduce them to EA through a lens of scientific and general skepticism (e.g. introductions via red-teaming, also see some thoughts from Buck about deference), we may hope to get fresh perspectives and critiques that do not share all of our biases.

  2. Try marginally harder to accept newcomers, particularly altruistically motivated ones with extremely high epistemic standards, and/or outsiders with other backgrounds, experiences, and worldviews than are typical in EA. This can be done by directing more resources (including money, highly talented people, institutional prestige, and management capacity) towards recruiting such people. While there are other costs and benefits to EA growth, I think on balance specific types of growth are helpful for reducing our internal groupthink, motivated reasoning, and the heavy selection biases in who engages in EA conversations.

  3. As a community, fund and socially support critical and external cost-effectiveness analyses and impact assessments of EA orgs. (Cf. Jepsen for EA?) Too high a fraction of cost-effectiveness analyses and impact assessments are conducted internally by EA orgs right now (or occasionally by people in the close orbit of such orgs). We should instead have a norm where the community as a whole funds and socially supports relatively independent parties (e.g. EA consultants) to do impact assessments of relatively core orgs like CEA, 80k, EA Funds, GPI, FHI, Rethink Priorities, ACE, GiveWell, Founders Pledge, Effective Giving, Longview, CHAI, CSET, etc. (Conspicuously missing from my list is Open Phil. Unfortunately, I perceive the funding situation in EA for pretty much all orgs and individual researchers to be so tied to Open Phil that I do not think it’s realistic to expect to see independent/unmotivated analyses or critiques of Open Phil.)

  4. Within EA orgs, encourage and reward dissent of various forms, including critical in-house reviews of various research and strategy docs. Rethink Priorities has a fairly strong/​critical internal review culture for research, and I’ve benefited a lot from the (sometimes harsh, usually fair) reviews of my own writings. Other orgs should probably consider this as well if they don’t already do so. That said, I expect org employees to have strategic blindspots about macro-level issues with e.g., their org’s theories of change, so while internal dissent, disagreement, review and criticism may improve research quality and microstrategy, it should be insufficient to counteract org-wide motivated reasoning and/​or mistaken worldviews.

  5. Commit to individual rationality and attempts to reduce motivated reasoning. Notes on how to reduce motivated reasoning include Julia Galef’s Scout Mindset (which I have not read) and my own scattered notes here. Aside from the institutional benefits, I think having less motivated reasoning is individually really helpful in improving the quality of, and reducing the bias in, e.g., individual research and career decisions. That said, we should be wary of surprising and suspicious convergence, and I am in general suspicious of exhortations to solve institutional epistemic issues by appealing to individual sacrifice. So while I think people should strive to reduce motivated reasoning in their own thinking/work, I do not believe this individual solution should be a frontline solution to our collective problems.

  6. Maybe encourage a greater number of people to apply to and seriously consider jobs outside of EA or EA-adjacent orgs. I think it is probably an unfortunate epistemic situation that many of our most committed members primarily work in a very small and niche set of organizations. So there’s an argument that it would be good if committed members were more willing to work elsewhere, purely to help counteract our own motivated reasoning and selection bias issues. Unfortunately, I personally strongly buy the arguments that for most people, working within core EA organizations can accomplish a lot of good relative to their likely counterfactuals, so I hesitate to broadly advise that people who can work in core EA orgs not do so.

  7. Maintain or improve the current culture of relatively open, frequent, and vigorous debate. I think this is one of the strongest reasons for why motivated reasoning hasn’t gotten much worse in our community, and it will be good to sustain it.

  8. Bias towards having open, public discussions of important concepts, strategies, and intellectual advances. Related to the above point, there may well be many good specific reasons against having public discussions of internal concepts (a canonical list is here). However, subjecting our arguments/​conclusions to greater scrutiny is probably on balance helpful for a) truth-seeking b) making sure we don’t get high on our own supply and c) improving collective rather than just individual intellectual advancement (cf. The Weapon of Openness). So all else equal, we should probably bias towards more rather than less openness, particularly for non-sensitive issues.

Thanks to Alexander Gordon-Brown, Michael Aird, Neil Dullaghan, Andrea Lincoln, Jake Mckinnon, and Adam Gleave for conversations that inspired this post. Thanks also to Charles Dillon, @mamamamy_anona, Michael St. Jules, Natalia Mendoca, Adam Gleave, David Moss, Janique Belman, Peter Wildeford, and Lizka Vaintrob for reading and giving comments on earlier drafts of this post.

I welcome further comments, analyses, and critiques.

Appendices and Endnotes

Appendix A: Quick sketch of “If you think truth is in general necessary for EA, and you think motivated reasoning limits truth, you should be worried about motivated reasoning in EA.”

This part may sound tautological. But the argument does have holes. I do not plug all the holes here, but briefly sketch why I think the balance of considerations should strongly point in favor of this argument.

It’s theoretically consistent for you to believe a) truth is really important to EA and b) motivated reasoning is harmful for truth but c) motivated reasoning isn’t a big deal for the EA project.

For example, you may believe that

  1. Motivated reasoning is a problem for truth-seeking in EA but we have so many other problems that harm truth-seeking that motivated reasoning is not high on the list.

  2. Truth in general is really important for EA but the specific ways that motivated reasoning causes us to diverge from the truth are irrelevant to doing good.

  3. Motivated reasoning is just another bias (in the statistical sense). Once identified, you (whether individual grantmakers or the community overall) can adjust for statistical biases.

For 1), you might imagine that there are many causes of error other than motivated reasoning (for example, we might just be really bad at arithmetic for reasons unrelated to wanting certain arithmetic conclusions to be true). So if motivated reasoning is just one of many, many other errors of equal import harming our truth-seeking, we should not give it special weight.

My rejoinder here is that I just think it’s empirically very implausible that there are many (say >10) other errors of equal or greater importance than motivated reasoning.

(I also place some weight on the claim that the EA movement is on average smarter than many other epistemic groups [citation needed], and that most errors are less common among smarter people, while motivated reasoning is uncorrelated or even positively correlated with intelligence. That said, I don’t lean on this argument too strongly compared to empirical observations/intuitions of the relevant error rates.)

For 2), you might imagine that motivated reasoning is common in EA, but that while truth in general is really important for EA, motivated-reasoning-specific biases are nearly irrelevant (analogy: if an alternative EA movement had a pervasive bias towards rhyming statements, we would not automatically conclude that this particular rhyming bias would be massively harmful to Rhyme EA’s epistemology).

I just find this very, very implausible, since motivated reasoning seems to cut at the points of critical importance in EA: impartially evaluating donation and career opportunities, and other “big deals” for large groups of moral patients. So I just don’t think it’s plausible that motivated reasoning isn’t unusually harmful for the prospect of doing impartial good, never mind that it’s unusually benign.

For 3), you may think that commonplace motivated reasoning is just a predictable feature of the world, and like other predictable features good Bayesians learn to adjust for it and have a fairly accurate view of the truth regardless.

I think this is probably the strongest argument against this section of my post. However, while it attenuates the strength of my argument, it doesn’t counteract it: believing that adjustment is enough assumes that the degree of bias is uniform, or at least highly predictable. I do think there’s a real effect from adjustments that mitigate the harms of motivated reasoning somewhat, but a) such adjustments are costly in effort and b) the degree of bias is not infinitely predictable, so we’re likely somewhat less collectively accurate due to motivated reasoning, in both theory and practice.
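A minimal sketch of why adjustment only partially helps (all distributions invented for illustration): even if a bias-aware reader correctly subtracts the average inflation from each estimate, the remaining error stays larger than in a world with no motivated reasoning, because the size of the bias varies from analysis to analysis.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 10_000
truth = rng.normal(0.0, 1.0, n)                    # true values of n hypothetical questions

honest = truth + rng.normal(0.0, 0.5, n)           # estimates in a world with no motivated reasoning
bias = rng.normal(1.0, 0.7, n)                     # inflation that varies unpredictably across analyses
motivated = truth + rng.normal(0.0, 0.5, n) + bias

adjusted = motivated - bias.mean()                 # the best a bias-aware reader can do on average

def rmse(estimates):
    return np.sqrt(np.mean((estimates - truth) ** 2))

print(f"RMSE, no motivated reasoning:      {rmse(honest):.2f}")
print(f"RMSE, motivated but bias-adjusted: {rmse(adjusted):.2f}")
# Adjusting removes the average inflation, but the unpredictable part of the bias
# still leaves us collectively less accurate than if there were no bias at all.
```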

That said, I have not constructed an exhaustive list of reasons why you may believe that motivated reasoning is both common and in practice not a big deal for EA (comments welcome!). Nonetheless, I will contend that it would be surprising, in worlds where motivated reasoning and selection bias are common in EA, for such biases to turn out to actually not be that big a deal.

Appendix B: Not using cost-effectiveness analyses does not absolve you of these problems

Most of my examples above have been made with reference to cost-effectiveness analyses.

Now, the EA movement has largely moved away from explicit cost-effectiveness analyses in recent years, especially outside the global health and development space. For example, in ACE’s charity evaluation criteria, “cost effectiveness” is only one of the things they look for when evaluating a charity.

Similarly, in longtermism, a recent post argued forcefully against the use of cost-effectiveness/expected value calculations for longtermism, saying:

Expected value calculations[1], the favoured approach for EA decision making, are all well and good for comparing evidence backed global health charities, but they are often the wrong tool for dealing with situations of high uncertainty, the domain of EA longtermism.

Most of the post’s comments were critical, but they didn’t positively argue that EV calculations are good for longtermism. Instead, they disputed that EV calculations were used in longtermism at all!

(To check whether my general impressions from public EA conversations match private work, I briefly discussed with three EA grantmakers what they see as the role of cost-effectiveness analyses in their own work. What they said is consistent with the picture painted above.)

So overall it doesn’t appear that explicit cost-effectiveness analyses (outside of neartermism) are used much in EA anymore. Instead, decisions within effective altruism seem primarily to be made with reference to good judgment, specific contextual factors, and crucial considerations.

I’m sure there are very good reasons (some stated, some unstated) for moving away from cost-effectiveness analysis. But I’m overall pretty suspicious of the general move, for a similar reason that I’d be suspicious of non-EAs telling me that we shouldn’t use cost-effectiveness analyses to judge their work, in favor of, say, systematic approaches, good intuitions, and specific contexts like lived experiences (cf. Beware Isolated Demands for Rigor):

I’m sure you have specific arguments for why in your case quantitative approaches aren’t very necessary and useful, because your uncertainties span multiple orders of magnitude, because all the calculations are so sensitive to initial assumptions, and so forth. But none of these arguments really point to verbal heuristics suddenly (despite approximately all evidence and track records to the contrary) performing better than quantitative approaches.

In addition to the individual epistemic issues with verbal assessments unmoored by numbers, we also need to consider the large communicative sacrifices made by not having a shared language (mathematics) to communicate things like uncertainty and effect sizes. Indeed, we have ample evidence that switching away from numerical reasoning when communicating uncertainty is a large source of confusion.

To argue that in your specific situation verbal judgment without numbers is superior to judgment with numbers, never mind that your proposed verbal approach obviates the biases associated with numerical cost-effectiveness modeling of the same questions, the strength of your evidence and arguments needs to be overwhelming. Instead, I get some simple verbal heuristic-y arguments, and all of this is quite suspicious.

Or more succinctly:

It’s easy to lie with numbers, but it’s even easier to lie without them

So overall I don’t think moving away from explicit expected value calculations and cost-effectiveness analyses is much of a solution, if at all, for motivated reasoning or selection biases in effective altruism. Most of what it does is make things less grounded in reality, less transparent, and harder to critique (cf. “Not Even Wrong”).

Endnotes

[1] Examples in this post are implicitly critical of the epistemics of cryptocurrency optimists, but in the interest of full disclosure, >>90% of my own current net worth is in cryptocurrency (long story).

[2] Both are “social” tasks but sales has a clear deliverable, good incentives, and feedback loops. It is much more ambiguous what it means to successfully maintain a brand or reputation.

[3] A reviewer noted one potential complication: if you find that EA has a highly suspicious compositional bias, then this is some reason to be suspicious about bias in our conclusions, even if the compositional bias itself seems unrelated to our conclusions. e.g. if all EA leaders have red hair, even if you are confident red hair does not influence conclusions, it is evidence that there was some other selection effect that may be related to conclusions.

Of course, it’s possible that “highly suspicious” compositional biases were arrived at by chance. Maybe what matters more is how probable it is that the observed bias (e.g. all red hair) represents a real source of selection bias (e.g. the staff are all part of a particular pre-existing red-hair-heavy social network) rather than just random chance, and whether that selection bias is plausibly related to something epistemic.