Compartmentalizing: Effective Altruism and Abortion

Cross-posted on my blog and the effective altruism forum with some minor tweaks; apologies if some of the formatting hasn't copied across. The article was written with an EA audience in mind but it is essentially one about rationality and consequentialism.

Summary: People frequently compartmentalize their beliefs, and avoid addressing the implications between them. Ordinarily this is perhaps innocuous, but when both ideas are highly morally important, their interaction is in turn important – many standard arguments on both sides of moral issues like the permissibility of abortion are significantly undermined or otherwise affected by EA considerations, especially moral uncertainty.

A long time ago, Will wrote an article about how a key part of rationality was taking ideas seriously: fully exploring ideas, seeing all their consequences, and then acting upon them. This is something most of us do not do! I for one certainly have trouble. He later partially redacted it, and Anna has an excellent article on the subject, but at the very least decompartmentalizing is a very standard part of effective altruism.

Similarly, I think people selectively apply Effective Altruist (EA) principles. People are very willing to apply them in some cases, but when those principles would cut at a core part of the person's identity – like requiring them to dress appropriately so they seem less weird – people are much less willing to take those EA ideas to their logical conclusion.

Consider your personal views. I've certainly changed some of my opinions as a result of thinking about EA ideas. For example, my opinion of bednet distribution is now much higher than it once was. And I've learned a lot about how to think about some technical issues, like regression to the mean. Yet I realized that I had rarely done a full 180 – and I think this is true of many people:

  • Many think EA ideas argue for more foreign aid – but did anyone come to this conclusion who had previously been passionately anti-aid?

  • Many think EA ideas argue for vegetarianism – but did anyone come to this conclusion who had previously been passionately carnivorous?

  • Many think EA ideas argue against domestic causes – but did anyone come to this conclusion who had previously been a passionate nationalist?

Yet this is quite worrying. Given the power and scope of many EA ideas, it seems that they should lead to people changing their minds on issues where they had previously been very certain, and indeed emotionally involved.

Obviously we don't need to apply EA principles to everything – we can probably continue to brush our teeth without much reflection. But we probably should apply them to issues which are seen as being very important: given the importance of those issues, any implications of EA ideas would probably be important implications.

Moral Uncertainty

In his PhD thesis, Will MacAskill argues that we should treat normative uncertainty in much the same way as ordinary positive (empirical) uncertainty; we should assign credences (probabilities) to each theory, and then try to maximise the expected morality of our actions. He calls this idea 'maximise expected choice-worthiness', and if you're into philosophy, I recommend reading the paper. As such, when deciding how to act we should give greater weight to the theories we consider more likely to be true, and also give more weight to theories that consider the issue to be of greater importance.

This is important because it means that a novel view does not have to be totally persuasive to demand our observance. Consider, for example, vegetarianism. Maybe you think there's only a 10% chance that animal welfare is morally significant – you're pretty sure they're tasty for a reason. Yet if the consequences of eating meat are very bad in those 10% of cases (murder or torture, if the animal rights activists are correct), and the advantages are not very great in the other 90% (tasty, some nutritional advantages), we should not eat meat regardless. Taking into account the size of the issue at stake as well as the probability of its being correct means paying more respect to 'minority' theories.
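To make the structure of this reasoning explicit, here is a minimal sketch of an expected choice-worthiness calculation in Python. The 10%/90% credences come from the example above; the choice-worthiness scores themselves are illustrative assumptions of my own, not figures from the thesis.

```python
# Minimal sketch of 'maximise expected choice-worthiness' (MEC).
# Credences come from the vegetarianism example above; the
# choice-worthiness scores are illustrative assumptions only.

def expected_choiceworthiness(credences, choiceworthiness):
    """Credence-weighted sum of each theory's assessment of an action."""
    return sum(credences[theory] * choiceworthiness[theory] for theory in credences)

credences = {"animals_count": 0.10, "animals_dont_count": 0.90}

eat_meat = {"animals_count": -100.0,     # very bad if animal welfare matters
            "animals_dont_count": 1.0}   # modest gains: taste, some nutrition
skip_meat = {"animals_count": 0.0,
             "animals_dont_count": 0.0}

for name, action in [("eat meat", eat_meat), ("skip meat", skip_meat)]:
    print(name, expected_choiceworthiness(credences, action))
# eat meat -9.1   <- dominated, despite only a 10% credence that animals count
# skip meat 0.0
```

Even a small credence in the high-stakes theory is enough to swing the expected choice-worthiness, which is the point of the vegetarianism example.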

And this is more of an issue for EAs than for most people. Effective Altruism involves a group of novel moral premises, like cosmopolitanism, the moral imperative for cost-effectiveness and the importance of the far future. Each of these implies that our decisions are in some way very important, so even if we assign them only a small credence, their plausibility implies radical revisions to our actions.

One issue that Will touches on in his thesis is whether fetuses morally count. In the same way that we have moral uncertainty as to whether animals, or people in the far future, count, so too we have moral uncertainty as to whether unborn children are morally significant. Yes, many people are confident they know the correct answer – but there are many such people on each side of the issue. Given the degree of disagreement on the issue among philosophers, politicians and the general public, it seems like the perfect example of an issue where moral uncertainty should be taken into account – indeed Will uses it as a canonical example.

Consider the case of a pregnant woman, Sarah, wondering whether it is morally permissible to abort her child¹. The alternative course of action she is considering is putting the child up for adoption. In accordance with the level of social and philosophical debate on the issue, she is uncertain as to whether aborting the fetus is morally permissible. If it's morally permissible, it's merely permissible – it's not obligatory. She follows the example from Normative Uncertainty and constructs the following table:

|  | Abortion morally permissible | Abortion morally impermissible |
|---|---|---|
| Abort | Permissible | Impermissible |
| Carry to term and adopt | Permissible | Permissible |

In the best case scenario, abortion has nothing to recommend it, as adoption is also permissible. In the worst case, abortion is actually impermissible, whereas adoption is permissible. As such, adoption dominates abortion.

However, Sarah might not consider this representation adequate. In particular, she thinks that now is not the best time to have a child, and would prefer to avoid it.² She has made plans which are inconsistent with being pregnant, and prefers not to give birth at the current time. So she amends the table to take these preferences into account.

|  | Abortion morally permissible | Abortion morally impermissible |
|---|---|---|
| Abort | Permissible, and preferred | Impermissible |
| Carry to term and adopt | Permissible, but dispreferred | Permissible |

Now adoption no longer strictly dominates abortion, because she prefers abortion to adoption in the scenario where it is morally permissible. As such, she considers her credences: she finds the pro-choice arguments slightly more persuasive than the pro-life ones, so she assigns a 70% credence to abortion being morally permissible and a 30% credence to its being morally impermissible.

Looking at the table with these numbers in mind, intuitively it seems that again it's not worth the risk of abortion: a 70% chance of saving oneself inconvenience and temporary discomfort is not sufficient to justify a 30% chance of committing murder. But Sarah's unsatisfied with this unscientific comparison: it doesn't seem to have much of a theoretical basis, and she distrusts appeals to intuitions in cases like this. What is more, Sarah is something of a utilitarian; she doesn't really believe in something being impermissible.

Fortunately, there's a standard tool for making inter-personal welfare comparisons: QALYs. We can convert the previous table into QALYs, with the moral uncertainty now being expressed as uncertainty as to whether saving fetuses generates QALYs. If it does, then it generates a lot: supposing she's at the end of her first trimester, if she doesn't abort the baby it has a 98% chance of surviving to birth, at which point its life expectancy is 78.7 years in the US, for 0.98 × 78.7 = 77.126 QALYs. This calculation assigns no QALYs to the fetus's 6 months of existence between now and birth. If fetuses are not worthy of ethical consideration, then it accounts for 0 QALYs.

We also need to assign QALYs to Sarah. For an upper bound, being pregnant is probably not much worse than having both your legs amputated without medication, which costs about 0.494 QALYs per year, so let's conservatively use 0.494. She has an expected 6 months of pregnancy remaining, so we divide by 2 to get 0.247 QALYs. Women's Health Magazine gives the odds of maternal death during childbirth at 0.03% for 2013; we'll round up to 0.05% to take into account the risk of non-fatal injury. Women at 25 have a remaining life expectancy of around 58 years, so that's 0.05% × 58 = 0.029 QALYs. In total that gives us an estimate of 0.276 QALYs. If the baby doesn't survive to birth, however, some of these costs will not be incurred, so the truth is probably slightly lower than this. All in all, 0.276 QALYs seems like a reasonably conservative figure.

Obviously you could refine these numbers a lot (for example, years of old age are likely to be at lower quality of life, there are some medical risks to the mother from aborting a fetus, etc.) but they're plausibly in the right ballpark. They would also change if we used inherent temporal discounting, but probably we shouldn't.

| QALYs | Fetuses morally significant (credence 30%) | Fetuses not morally significant (credence 70%) |
|---|---|---|
| Abort | −77.126 | 0 |
| Carry to term and adopt | −0.276 | −0.276 |
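As a sanity check on the arithmetic, here is a minimal Python sketch reproducing the figures above. Every input is the post's own back-of-the-envelope estimate, not authoritative data.

```python
# Reproducing the rough QALY estimates above. All inputs are the post's own
# back-of-the-envelope figures, not authoritative data.

p_survive_to_birth = 0.98          # chance the fetus survives from the end of the first trimester
life_expectancy_at_birth = 78.7    # US life expectancy at birth, in years
fetus_qalys = p_survive_to_birth * life_expectancy_at_birth
print(f"QALYs at stake for the fetus: {fetus_qalys:.3f}")    # 77.126

pregnancy_weight = 0.494           # assumed QALY cost per year of being pregnant (upper bound)
discomfort_cost = pregnancy_weight * 6 / 12                  # six months remaining -> 0.247
p_maternal_harm = 0.0005           # 0.03% mortality, rounded up for non-fatal injury
risk_cost = p_maternal_harm * 58                             # 58 remaining years of life -> 0.029
mother_cost = discomfort_cost + risk_cost
print(f"QALY cost to the mother: {mother_cost:.3f}")         # 0.276
```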

We can then take into account her moral uncertainty directly, and calculate the expected QALYs of each action:

  • If she aborts the fetus, our expected QALYs are 70% × 0 + 30% × (−77.126) = −23.138

  • If she carries the baby to term and puts it up for adoption, our expected QALYs are 70% × (−0.276) + 30% × (−0.276) = −0.276

Which again suggests that the moral thing to do is not to abort the baby. Indeed, the life expectancy at birth is so long that it quite easily dominates the calculation: Sarah would have to be extremely confident in rejecting the value of the fetus to justify aborting it. So, mindful of overconfidence bias, she decides to carry the child to term.
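The same comparison, written as a short runnable sketch using the credences and QALY figures assumed above:

```python
# Expected QALYs of each action, weighting each moral view by Sarah's credence in it.
credence_fetus_counts = 0.30
fetus_qalys = 77.126    # from the earlier estimate
mother_cost = 0.276     # direct QALY cost of carrying to term

expected_abort = credence_fetus_counts * (-fetus_qalys)   # -23.138
expected_adopt = -mother_cost                             # -0.276 under either moral view
print(f"abort: {expected_abort:.3f} QALYs, adopt: {expected_adopt:.3f} QALYs")
```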

Indeed, we can show just how confident in the lack of moral significance of fetuses one would have to be to justify aborting one. Here is a sensitivity table, showing credence in the moral significance of fetuses on the y-axis, and the direct QALY cost of pregnancy on the x-axis, for a wide range of possible values. The direct QALY cost of pregnancy is obviously bounded above by its limited duration. As is immediately apparent, one has to be very confident in fetuses lacking moral significance, and pregnancy has to be very bad, before aborting a fetus becomes even slightly QALY-positive. For moderate values, it is extremely QALY-negative.

[Sensitivity table: credence in the moral significance of fetuses (rows) against the direct QALY cost of pregnancy (columns)]
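A sketch of how such a sensitivity table can be generated; the particular grid of credences and pregnancy costs below is an illustrative choice, not a reproduction of the original table's axes.

```python
# Expected QALY change from aborting rather than adopting, over a grid of
# credences and pregnancy costs. The grid values are illustrative choices.
fetus_qalys = 77.126

credences = [0.001, 0.01, 0.05, 0.1, 0.3, 0.5]   # credence that fetuses morally count
pregnancy_costs = [0.1, 0.25, 0.5, 1.0, 2.0]     # direct QALY cost of pregnancy

print("credence\\cost " + "".join(f"{c:>8}" for c in pregnancy_costs))
for cr in credences:
    # benefit of aborting = pregnancy cost avoided - expected QALYs lost by the fetus
    row = [cost - cr * fetus_qalys for cost in pregnancy_costs]
    print(f"{cr:>12}  " + "".join(f"{v:8.2f}" for v in row))
```

In this grid, positive values appear only in the lowest-credence rows.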

Other EA concepts and their applications to this issue

Of course, moral uncertainty is not the only EA principle that could have bearing on the issue, and given that the theme of this blogging carnival, and this post, is things we're overlooking, it would be remiss not to give at least a broad overview of some of the others. Here, I don't intend to judge how persuasive any given argument is – as we discussed above, this is a debate that has been going on without settlement for thousands of years – but merely to show the ways that common EA arguments affect the plausibility of the different arguments. This is a section about the directionality of EA concerns, not about their overall magnitudes.

Not really people

One of the most important arguments for the permissibility of abortion is that fetuses are in some important sense 'not really people'. In many ways this argument resembles the anti-animal-rights argument that animals are also 'not really people'. We already covered above the way that considerations of moral uncertainty undermine both these arguments, but it's also noteworthy that in general the two views seem to be mutually supporting (or mutually undermining, if both are false). Animal-rights advocates often appeal to the idea of an 'expanding circle' of moral concern. I'm skeptical of such an argument, but it seems clear that the larger your circle, the more likely fetuses are to end up on the inside. The fact that, in the US at least, animal activists tend to be pro-abortion seems to be more of a historical accident than anything else. We could imagine alternative-universe political coalitions, where a "Defend the Weak; They're morally valuable too" party faced off against an "Exploit the Weak; They just don't count" party. In general, to the extent that EAs care about animal suffering (even insect suffering), EAs should tend to be concerned about the welfare of the unborn.

Not people yet

A slightly different common argument is that while fetuses will eventually be people, they're not people yet. Since they're not people right now, we don't have to pay any attention to their rights or welfare right now. Indeed, many people make short-sighted decisions that implicitly assign very little value to the futures of people currently alive, or even to their own futures – through self-destructive drug habits, or simply failing to save for retirement. If we don't assign much value to our own futures, it seems very sensible to disregard the futures of those not even born. And even if people who disregarded their own futures were simply negligent, we might still be concerned about things like the non-identity problem.

Yet it seems that EAs are almost uniquely unsuited to this response. EAs do tend to care explicitly about future generations. We put considerable resources into investigating how to help them, whether through addressing climate change or existential risks. And yet these people have far less of a claim to current personhood than fetuses, who at least have current physical form, even if it is diminutive. So again, to the extent that EAs care about future welfare, EAs should tend to be concerned about the welfare of the unborn.

Replaceability

Another important EA idea is that of replaceability. Typically this arises in contexts of career choice, but there is a different application here. The QALYs associated with aborted children might not be so bad if the mother will go on to have another child instead. If she does, the net QALY loss is much lower than the gross QALY loss. Of course, the benefits of aborting the fetus are equivalently much smaller – if she has a child later on instead, she will have to bear the costs of pregnancy eventually anyway. This resembles concerns that maybe saving children in Africa doesn't make much difference, because their parents adjust their subsequent fertility.
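To see how this would enter the earlier arithmetic, here is a minimal sketch; the replacement probability p_replace is a purely hypothetical parameter, not an estimate.

```python
# If an aborted child is 'replaced' by a later one with probability p_replace,
# only the unreplaced fraction counts as a net expected QALY loss.
# p_replace is a hypothetical illustration, not an estimate.
fetus_qalys = 77.126
credence_fetus_counts = 0.30

for p_replace in (0.0, 0.5, 0.9):
    net_expected_loss = credence_fetus_counts * fetus_qalys * (1 - p_replace)
    print(f"p_replace={p_replace:.1f}: expected net QALY loss = {net_expected_loss:.2f}")
```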

The plausibility of this argument comes from the observation that, at least in the US, most families have a certain ideal number of children in mind, and basically achieve this goal. As such, missing an opportunity to have an early child simply results in having another later on.

If this were fully true, utilitarians might decide that abortion actually has no QALY impact at all – all it does is change the timing of events. On the other hand, fertility declines with age, so many couples planning to have a replacement child later may be unable to do so. Also, some people do not have ideal family size plans.

Additionally, this does not really seem to hold when the alternative is adoption; presumably a woman putting a child up for adoption does not consider it part of her family, so her future childbearing would be unaffected. This argument might hold if raising the child yourself were the only alternative, but given that adoption services are available, it does not seem to go through.

Autonomy

Sometimes people argue for the permissibility of abortion through autonomy arguments. "It is my body", such an argument would go, "therefore I may do whatever I want with it." To a certain extent this argument is addressed by pointing out that one's bodily rights presumably do not extend to killing others, so if the anti-abortion side is correct, or even has a non-trivial probability of being correct, autonomy would be insufficient. It seems that if the autonomy argument is to work, it must be because a different argument has established the non-personhood of fetuses – in which case the autonomy argument is redundant. Yet even putting this aside, this argument is less appealing to EAs than to non-EAs, because EAs often hold a distinctly non-libertarian account of personal ethics. We believe it is actually good to help people (and avoid hurting them), and perhaps that it is bad to avoid doing so. And many EAs are utilitarians, for whom helping/not-hurting is not merely praiseworthy but actually compulsory. EAs are generally not very impressed with Ayn Rand-style autonomy arguments for rejecting charity, so again EAs should tend to be unsympathetic to autonomy arguments for the permissibility of abortion.

Indeed, some EAs even think we should be legally obliged to act in good ways, whether through laws against factory farming or tax-funded foreign aid.

Deontology

An argument often used on the opposite side – that is, an argument used to oppose abortion – is that abortion is murder, and murder is simply always wrong. Whether because God commanded it or Kant derived it, we should place the utmost importance on never murdering. I'm not sure that any EA principle directly pulls against this, but nonetheless most EAs are consequentialists, who believe that all values can be compared. If aborting one child would save a million others, most EAs would probably endorse the abortion. So I think this is one case where a common EA view pulls in favor of the permissibility of abortion.

I didn’t ask for this

Another argument often used for the permissibility of abortion is that the situation is in some sense unfair. If you did not intend to become pregnant – perhaps you even took precautions to avoid becoming so – but nonetheless end up pregnant, you are in some way not responsible for the pregnancy. And since you are not responsible for it, you have no obligations concerning it – so you may permissibly abort the fetus.

However, once again this runs counter to a major strand of EA thought. Most of us did not ask to be born in rich countries, or to be intelligent, or hardworking. Perhaps it was simply luck. Yet being in such a position nonetheless means we have certain opportunities and obligations. Specifically, we have the opportunity to use our wealth to significantly aid those less fortunate than ourselves in the developing world, and many EAs would say we have the obligation too. So EAs seem to reject the general idea that not intending a situation relieves one of the responsibilities of that situation.

Infanticide is okay too

A frequent argument against the permissibility of aborting fetuses is by analogy to infanticide. In general it is hard to produce a coherent criterion that permits the killing of babies before birth but forbids it after birth. For most people, this is a reasonably compelling objection: murdering innocent babies is clearly evil! Yet some EAs actually endorse infanticide. If you were one of those people, this particular argument would have little sway over you.

Moral Universalism

A common implicit premise in many moral discussions is that the same moral principles apply to everyone. When Sarah did her QALY calculation, she counted the baby's QALYs as equally important to her own in the scenario where they counted at all. Similarly, both sides of the debate assume that whatever the answer is, it will apply fairly broadly. Perhaps permissibility varies by the age of the fetus – maybe ending when viability hits – but the same answer will apply to rich and poor, Christian and Jew, etc.

This is something some EAs might reject. Yes, saving the baby produces many more QALYs than Sarah loses through the pregnancy, and that would be the end of the story if Sarah were simply an ordinary person. But Sarah is an EA, and so has a much higher opportunity cost for her time. Becoming pregnant will undermine her career as an investment banker, the argument would go, which in turn prevents her from donating to AMF and saving a great many lives. Because of this, Sarah is in a special position – it is permissible for her, but it would not be permissible for someone who wasn't saving many lives a year.

I think this is a pretty repugnant attitude in general, and this is a particularly objectionable instance of it, but I include it here for completeness.

May we discuss this?

Now we’ve con­sid­ered these ar­gu­ments, it ap­pears that ap­ply­ing gen­eral EA prin­ci­ples to the is­sue in gen­eral tends to make abor­tion look less morally per­mis­si­ble, though there were one or two ex­cep­tions. But there is also a sec­ond or­der is­sue that we should per­haps ad­dress – is it per­mis­si­ble to dis­cuss this is­sue at all?

Nothing to do with you

A frequently seen argument on this issue is to claim that the speaker has no right to opine on it. If it doesn't personally affect you, you cannot discuss it – especially if you're privileged. As many (a majority?) of EAs are male, and of the women many are not pregnant, this would dramatically curtail the ability of EAs to discuss abortion. This is not so much an argument on one side or the other of the issue as an argument for silence.

Leaving aside the inherent virtues and vices of this argument, it is not very suitable for EAs, because EAs have many, many opinions on topics that don't directly affect them:

  • EAs have opinions on disease in Africa, yet most have never been to Africa, and never will

  • EAs have opinions on (non-human) animal suffering, yet most are not non-human animals

  • EAs have opinions on the far future, yet live in the present

Indeed, EAs seem more qualified to comment on abortion – as we were all once fetuses, and many of us will become pregnant. If taken seriously, this argument would call foul on virtually every EA activity! And this is no idle fantasy – there are certainly some people who think that Westerners cannot usefully contribute to solving African poverty.

Too controversial

We can safely say this is a somewhat controversial issue. Perhaps it is too controversial – maybe it is bad for the movement to discuss. One might accept the arguments above – that EA principles generally undermine the traditional reasons for thinking abortion is morally permissible – yet think we should not talk about it. The controversy might divide the community and undermine trust. Perhaps it might deter newcomers. I'm somewhat sympathetic to this argument – I take the virtue of silence seriously, though eventually my boyfriend persuaded me it was worth publishing.

Note that the controversial nature of the issue is itself evidence against abortion's moral permissibility, due to moral uncertainty.

However, the EA movement is no stranger to controversy.

  • There is a semi-official EA position on immigration, which is about as controversial as abortion in the US at the moment, and the EA position is such an extreme position that essentially no mainstream politicians hold it.

  • There is a semi-official EA position on vegetarianism, which is pretty controversial too, as it involves implying that the majority of Americans are complicit in murder every day.

Not worthy of discussion

Finally, another objection to discussing this is that it simply isn't an EA issue. There are many disagreements in the world, yet there is no need for an EA view on each. Conflict between the Lilliputians and Blefuscudians notwithstanding, there is no need for an EA perspective on which end of the egg to break first. And we should be especially careful of heated, emotional topics with less avenue to pull the rope sideways. As such, even if the object-level arguments given above are correct, we should simply decline to discuss the issue.

However, it seems that if abortion is a moral issue, it is a very large one. In the same way that the sheer number of QALYs lost makes abortion worse than adoption even if our credence in fetuses having moral significance is very low, the large number of abortions occurring each year makes the issue as a whole highly significant. In 2011, over 1 million babies were aborted in the US. I've seen a wide range of global estimates, from around 10 million to over 40 million. By contrast, the WHO estimates there are fewer than 1 million malaria deaths worldwide each year. Abortion deaths also cause a higher loss of QALYs due to the young age at which they occur. On the other hand, we should discount them for the uncertainty that they are morally significant. And perhaps there is an even larger, closely related moral issue. The size of the issue is not the only factor in estimating the cost-effectiveness of interventions, but it is the most easily estimable. On the other hand, I have little idea how many dollars of donations it would take to save a fetus – it seems like an excellent example of some low-hanging fruit for research.

Conclusion

People frequently compartmentalize their beliefs, and avoid addressing the implications between them. Ordinarily this is perhaps innocuous, but when both ideas are highly morally important, their interaction is in turn important. In this post we examined the implications of common EA beliefs for the permissibility of abortion. Taking into account moral uncertainty makes aborting a fetus seem far less permissible, as the high counterfactual life expectancy of the baby tends to dominate other factors. Many other EA views are also significant to the issue, making various standard arguments on each side less plausible.


  1. There doesn’t seem to be any neu­tral lan­guage one can use here, so I’m just go­ing to switch back and forth be­tween ‘fe­tus’ and ‘child’ or ‘baby’ in a vain at­tempt at ter­minolog­i­cal neu­tral­ity.

  2. I chose this reason because it is the most frequently cited main motivation for aborting a fetus according to the Guttmacher Institute.