Log-normal Lamentations

[Morose. Also very roughly drafted.]

Normally, things are distributed normally. Human talents may turn out to be one of these things. Some people are lucky enough to find themselves on the right side of these distributions – smarter than average, better at school, more conscientious, whatever. To them go many spoils – probably more so now than at any time before, thanks to the information economy.

There’s a common story told about a hotshot student at school whose ego crashes to earth when they go to university and find themselves among a group all as special as they thought they were. The reality might be worse: many of the groups the smart or studious segregate into (physics professors, Harvard undergraduates, doctors) have threshold (or near-threshold) effects: only those with straight A’s, only those with IQs > X, etc. need apply. This introduces a positive skew to the population: most (and the median) are below the average, brought up by a long tail of the (even more) exceptional. Instead of comforting ourselves by looking at the entire population to which we compare favorably, most of us will look around our peer group, find ourselves in the middle, and have to look a long way up to the best. 1
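The skew is easy to check for yourself. A minimal sketch in Python (the population parameters and the threshold are illustrative assumptions, not data): truncate a normal distribution at an admissions cut-off, and the selected group comes out positively skewed, with its median sitting below its mean.

```python
import random
import statistics

random.seed(0)

# Illustrative assumption: some talent score, normally distributed.
population = [random.gauss(100, 15) for _ in range(100_000)]

# Admit only those above a threshold (here, two standard deviations up),
# as a selective profession or university might.
threshold = 130
selected = [x for x in population if x > threshold]

mean_sel = statistics.mean(selected)
median_sel = statistics.median(selected)

# Truncating the left tail leaves a positively skewed group: the median
# member sits below the group's mean, pulled up by the exceptional few.
print(f"selected mean:   {mean_sel:.1f}")
print(f"selected median: {median_sel:.1f}")
```

However the threshold is set, the same shape falls out: most of the admitted group clusters just above the cut-off, with a thinning tail above them.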

[Figure: a normal distribution]

Yet part of growing up is recognizing there will inevitably be people better than you are – the more able may be able to buy their egos time, but no more. But that needn’t be so bad: in several fields (such as medicine) it can be genuinely hard to judge ‘betterness’, and so harder to find exemplars to illuminate your relative mediocrity. Often there are a variety of dimensions to being ‘better’ at something: although I don’t need to try too hard to find doctors who are better at some aspect of medicine than I (more knowledgeable, kinder, more skilled in communication etc.) it is mercifully rare to find doctors who are better than me in all respects. And often the tails are thin: if you’re around 1 standard deviation above the mean, people many times further from the average than you are will still be extraordinarily rare, even if you had a good stick to compare them to yourself.

Look at our thick-tailed works, ye average, and despair! 2

One nice thing about the EA community is that they tend to be an exceptionally able bunch: I remember being in an ‘intern house’ that housed the guy who came top in philosophy at Cambridge, the guy who came top in philosophy at Yale, and the guy who came top in philosophy at Princeton – and although that isn’t a standard sample, we seem to be drawn disproportionately not only from those who went to elite universities, but those who did extremely well at elite universities. 3 This sets the bar very high.

Many of the ‘high impact’ activities these high achieving people go into (or aspire to go into) are more extreme than normal(ly distributed): log-normal commonly, but it may often be Pareto. The distribution of income or outcomes from entrepreneurial ventures (and therefore upper bounds on what can be ‘earned to give’), the distribution of papers or citations in academia, the impact of direct projects, and (more tenuously) degree of connectivity or importance in social networks or movements would all be examples: a few superstars and ‘big winners’, but orders of magnitude smaller returns for the rest.
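To get a feel for how much a heavy tail concentrates outcomes, here is a hedged sketch (the log-normal and its sigma are assumptions chosen purely for illustration, not fitted to any data): draw simulated donations from a log-normal and see what share of the total the top 1% of donors account for.

```python
import random

random.seed(0)

# Illustrative assumption: 'earning to give' outcomes follow a log-normal
# distribution (the sigma of 1.5 is arbitrary, chosen for the sketch).
donors = sorted((random.lognormvariate(0, 1.5) for _ in range(10_000)),
                reverse=True)

total = sum(donors)
top_share = sum(donors[:100]) / total  # share given by the top 1% of donors

# A few 'big winners' account for a disproportionate share of all giving,
# while the median donor gives orders of magnitude less than the largest.
print(f"top 1% of donors give {top_share:.0%} of the total")
print(f"largest donation / median donation: {donors[0] / donors[5_000]:.0f}x")
```

Under a normal distribution with the same mean, no such concentration appears; a Pareto tail would concentrate things even more sharply than this log-normal does.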

Insofar as I have an ‘EA career path’, mine is earning to give: if I were trying to feel good about the good I was doing, my first port of call would be my donations. In sum, I’ve given quite a lot to charity – ~£15,000 and counting – which I’m proud of. Yet I’m no banker (or algo-trader) – those who are really good (or lucky, or both) can end up out of university with higher starting salaries than my peak expected salary, and so can give away more than ten times more than I will be able to. I know several of these people, and the running tally of each of their donations is often around ten times my own. If they or others become even more successful in finance, or very rich starting a company, there might be several more orders of magnitude between their giving and mine. My contributions may be little more than a rounding error to their work.

A shattered visage

Earning to give is kinder to the relatively minor players than other ‘fields’ of EA activity, as even though Bob’s or Ellie’s donations are far larger, they do not overdetermine my own: that their donations dewormed 1000x children does not make the 1x I dewormed any less valuable. It is unclear whether this applies to other ‘fields’: suppose I became a researcher working on a malaria vaccine, but this vaccine is discovered by Sally the super scientist and her research group across the world. Suppose also that Sally’s discovery was independent of my own work. Although it might have been ex ante extremely valuable for me to work on malaria, its value is vitiated when Sally makes her breakthrough, in the same way a lottery ticket loses value after the draw.

So there are a few ways an Effective Altruist mindset can depress our egos:

  1. It is generally a very able and high achieving group of people, setting the ‘average’ pretty high.

  2. ‘Effective Altruist’ fields tend to be heavy-tailed, so that being merely ‘average’ (for EAs!) in something like earning to give means having a much smaller impact when compared to one of the (relatively common) superstars.

  3. (Our keenness for quantification makes us particularly inclined towards and able to make these sorts of comparative judgements, ditto the penchant for taking things to be commensurate).

  4. Many of these fields have ‘lottery-like’ characteristics where ex ante and ex post value diverge greatly. ‘Taking a shot’ at being an academic or entrepreneur or politician or leading journalist may be a good bet ex ante for an EA because the upside is so high even if their chances of success remain low (albeit better than the standard reference class). But if the median outcome is failure, the majority who will fail might find the fact it was a good idea ex ante of scant consolation – rewards (and most of the world generally) run ex post facto.
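The ex ante / ex post divergence above is just the gap between a distribution’s mean and its median. A toy sketch (the 5% success probability and the payoff are made-up numbers, chosen only to illustrate the shape): a career bet can have a solidly positive expected value while the typical outcome is still zero.

```python
import random
import statistics

random.seed(0)

# Made-up numbers: a 5% chance of a payoff of 1,000 (in arbitrary units
# of impact), and nothing otherwise.
p_success, payoff = 0.05, 1_000
outcomes = [payoff if random.random() < p_success else 0
            for _ in range(100_000)]

# Ex ante, the bet is worth the mean (p_success * payoff); ex post, the
# median careerist walks away with nothing.
print(f"mean (ex ante value): {statistics.mean(outcomes):.0f}")
print(f"median (typical ex post outcome): {statistics.median(outcomes):.0f}")
```

The mean here is around 50 while the median is 0: a ‘good bet’ by expectation, and still a failure for most of the people who take it.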

What remains besides

I haven’t found a ready ‘solution’ for these problems, and I’d guess there isn’t one to be found. We should be sceptical of ideological panaceas that can do no wrong and everything right, and EA is no exception: we should expect it to have some costs, and perhaps this is one of them. If so, better to accept it rather than defend the implausibly defensible.

In the same way I could console myself, on confronting a generally better doctor: “Sure, they are better at A, and B, and C, … and Y, but I’m better at Z!”, one could do the same with regard to the axes of one’s ‘EA work’. “Sure, Ellie the entrepreneur has given hundreds of times more money to charity, but what’s she like at self-flagellating blog posts, huh?” There’s an incentive to diversify as (combinatorially) it will be less frequent to find someone who strictly dominates you, and although we want to compare across diverse fields, doing so remains difficult. Pablo Stafforini has asked elsewhere whether EAs should be ‘specialising’ more instead of spreading their energies over disparate fields: perhaps this makes that less surprising. 4

Insofar as people’s self-esteem is tied up with their work as EAs (and, hey, shouldn’t it be, in part?), there is perhaps a balance to be struck between soberly and frankly discussing the outcomes and merits of our actions, and being gentle to avoid hurting our peers by talking down their work. Yes, we would all want to know if what we were doing was near useless (or even net negative), but this should be broken to us with care. 5

‘Suck it up’ may be the best strategy. These problems become more acute the more we care about our ‘status’ in the EA community; the pleasure we derive from not only doing good, but doing more good than our peers; and our desire to be seen as successful. Good though it is for these desires to be sublimated to better ends (far preferable, all else equal, that rivals choose charitable donations rather than Veblen goods as the arena of their competition), it would be even better to guard against these desires in the first place. Primarily, worry about how to do the most good. 6

Notes:

  1. As further bad news, there may be a progression of ‘tiers’ which are progressively more selective, somewhat akin to stacked band-pass filters: even if you were the best maths student at your school, then the best at university, you may still find yourself plonked around the median in a positively skewed population of maths professors – and if you were an exceptional maths professor, you might find yourself plonked around the median in the population of Fields medalists. And so on (especially – see infra – if the underlying distribution is something scale-free).

  2. I wonder how much this post is a monument to the grasping vaingloriousness of my character…

  3. Pace: academic performance is not the only (nor the best) measure of ability. But it is a measure, and a fairly germane one for the fairly young population ‘in’ EA.

  4. Although there are other more benign possibilities, given diminishing marginal returns and the lack of people available. As a further aside, I’m wary of arguments/discussions that note bias or self-serving explanations that lie parallel to an opposing point of view (“We should expect people to be more opposed to my controversial idea than they should be due to status quo and social desirability biases”, etc.). First because there are generally so many candidate biases available they end up pointing in most directions; second because it is unclear whether knowing about or noting biases makes one less biased; and third because generally more progress can be made on object-level disagreement than on trying to evaluate the strength and relevance of particular biases.

  5. Another thing I am wary of is Crocker’s rules: the idea that you unilaterally declare: ‘don’t worry about being polite with me, just tell it to me straight! I won’t be offended’. Naturally, one should try and separate one’s sense of offense from whatever information was there – it would be a shame to reject a correct diagnosis of our problems because of how it was said. Yet that is very different from trying to eschew this ‘social formatting’ altogether: people (myself included) generally find it easier to respond well when people are polite, and I suspect this even applies to those eager to make Crocker’s Rules-esque declarations. We might (especially if we’re involved in the ‘rationality’ movement) want to overcome petty irrationalities like incorrectly updating on feedback because of an affront to our status or self-esteem. Yet although petty, they are surprisingly difficult to budge (if I cloned you 1000 times and ‘told it straight’ to half, yet made an effort to be polite with the other half, do you think one group would update better?) and part of acknowledging our biases should be an acknowledgement that it is sometimes better to placate them rather than overcome them.

  6. cf. Max Ehrmann, who put it well:

    … If you compare yourself with others, you may become vain or bitter, for always there will be greater and lesser persons than yourself.

    Enjoy your achievements as well as your plans. Keep interested in your own career, however humble…