Extended Quote on the Institution of Academia

From the top-notch 80,000 Hours podcast, and their recent interview with Holden Karnofsky (Executive Director of the Open Philanthropy Project).

What follows is a short analysis of what academia does and doesn’t do, followed by a few discussion points from me at the end. I really like this frame, and I’ll likely use it in conversation in the future.


Robert Wiblin: What things do you think you’ve learned, over the last 11 years of doing this kind of research, about in what situations you can trust expert consensus and in what cases you should think there’s a substantial chance that it’s quite mistaken?

Holden Karnofsky: Sure. I mean I think it’s hard to generalize about this. Sometimes I wish I would write down my model more explicitly. I thought it was cool that Eliezer Yudkowsky did that in his book, Inadequate Equilibria. I think one thing that I especially look for, in terms of when we’re doing philanthropy, is I’m especially interested in the role of academia and what academia is able to do. You could look at corporations, you can understand their incentives. You can look at governments, you can sort of understand their incentives. You can look at think-tanks, and a lot of them are just like… They’re aimed directly at governments, in a sense. You can sort of understand what’s going on there.

Academia is the default home for people who really spend all their time thinking about things that are intellectual, that could be important to the world, but that there’s no client who is like, “I need this now for this reason. I’m making you do it.” A lot of the times, when someone says, “Someone should, let’s say, work on AI alignment or work on AI strategy or, for example, evaluate the evidence base for bed nets and deworming, which is what GiveWell does…” A lot of the time, my first question, when it’s not obvious where else it fits, is: would this fit into academia?

This is something where my opinions and my views have evolved a lot, where I used to have this very simplified view: “Academia. That’s like this giant set of universities. There’s a whole ton of very smart intellectuals who, between them, can do everything. There’s a zillion fields. There’s a literature on everything, as has been written on Marginal Revolution, all that sort of thing.” I really never knew when to expect that something was going to be neglected and when it wasn’t, and it takes a giant literature review to figure out which is which.

I would say I’ve definitely evolved on that. Today, when I think about what academia does, I think it is really set up to push the frontier of knowledge, and I think especially in the harder sciences. I would say the vast majority of what is going on in academia is people trying to do something novel, interesting, clever, creative, different, new, provocative, that really pushes the boundaries of knowledge forward in a new way. I think that’s obviously a really important and great thing. I’m really, incredibly glad we have institutions to do it.

I think there are a whole bunch of other activities that are intellectual, that are challenging, that take a lot of intellectual work, and that are incredibly important, and that are not that. They have nowhere else to live. No one else can do them. I’m especially interested, and my eyes especially light up, when I see an opportunity to… There’s an intellectual topic, it’s really important to the world, but it’s not advancing the frontier of knowledge. It’s more figuring out something in a pragmatic way that is going to inform what decision makers should do, and also there’s no one decision maker asking for it, as would be the case with government or corporations.

To give examples of this, I mean I think GiveWell is the first place where I might have initially expected that development economics was going to tell us what the best charities are. Or, at least, tell us what the best interventions are. Tell us, out of bed nets, deworming, cash transfers, agricultural extension programs, and education improvement programs, which ones are helping the most people for the least money. There’s really very little work on this in academia.

A lot of times, there will be one study that tries to estimate the impact of deworming, but very few or no attempts to really replicate it. It’s much more valuable to academics to have a new insight, to show something new about the world, than to try and nail something down. It really got brought home to me recently when we were doing our Criminal Justice Reform work and we wanted to check ourselves. We wanted to check this basic assumption that it would be good to have less incarceration in the US.

David Roodman, who is basically the person that I consider the gold standard of a critical evidence reviewer, someone who can really dig into a complicated literature and come up with the answers, did what I think was a really wonderful and really fascinating paper, which is up on our website, where he looked for all the studies on the relationship between incarceration and crime: what happens if you cut incarceration, do you expect crime to rise, to fall, to stay the same? He really picked them apart. What happened is he found a lot of the best, most prestigious studies, and in about half of them he found fatal flaws when he just tried to replicate them or redo their conclusions.

When he put it all together, he ended up with a different conclusion from what you would get if you just read the abstracts. It was a completely novel piece of work that reviewed this whole evidence base at a level of thoroughness that had never been done before, and came out with a conclusion that was different from what you naively would have thought: his best estimate is that, at current margins, we could cut incarceration and there would be no expected impact on crime. He did all that. Then, he started submitting it to journals. It’s gotten rejected from a large number of journals by now [laughter]. I mean starting with the most prestigious ones and then going to the less prestigious.

Robert Wiblin: Why is that?

Holden Karnofsky: Because his paper, it’s really, I think, incredibly well done. It’s incredibly important, but in some sense, in some kind of academic taste sense, there’s nothing new in there. He took a bunch of studies. He redid them. He found that they broke. He found new issues with them, and he found new conclusions. From a policy maker or philanthropist perspective, all very interesting stuff, but did we really find a new method for asserting causality? Did we really find a new insight about how the mind of a perpetrator works? No. We didn’t advance the frontiers of knowledge. We pulled together a bunch of knowledge that we already had, and we synthesized it. I think that’s a common theme: our academic institutions were set up a while ago, at a time when it seemed like the most valuable thing to do was just to search for the next big insight.

These days, they’ve been around for a while. We’ve got a lot of insights. We’ve got a lot of insights sitting around. We’ve got a lot of studies. I think a lot of the time what we need to do is take the information that’s already available, take the studies that already exist, and synthesize them critically and say, “What does this mean for what we should do? Where should we give money? What should policy be?”

I don’t think there’s any home in academia to do that. I think that creates a lot of the gaps. This also applies to AI timelines, where there’s nothing particularly innovative, groundbreaking, knowledge-frontier-advancing, creative, or clever about just… It’s a question that matters. When can we expect transformative AI, and with what probability? It matters, but it’s not a work of frontier-advancing intellectual creativity to try to answer it.

A very common theme in a lot of the work we advance is, instead of pushing the frontiers of knowledge, to take knowledge that’s already out there: pull it together, critique it, synthesize it, and decide what that means for what we should do. Especially, I think, there’s also very little in the way of institutions that are trying to anticipate big intellectual breakthroughs down the road, such as AI, such as other technologies that could change the world; think about how they could make the world better or worse, and what we can do to prepare for them.

I think historically, when academia was set up, we were in a world where it was really hard to predict what the next scientific breakthrough was going to be. It was really hard to predict how it would affect the world, but it usually turned out pretty well. I think for various reasons the scientific landscape may be changing now, where… I think, in some ways, there are arguments it’s getting easier to see where things are headed. We know more about science. We know more about the ground rules. We know more about what cannot be done. We know more about what probably, eventually can be done.

I think it’s somewhat of a happy coincidence so far that most breakthroughs have been good. To say, “I see a breakthrough on the horizon. Is that good or bad? How can we prepare for it?”, that’s another thing academia is really not set up to do. Academia is set up to get the breakthrough. That is a question I ask myself a lot: here’s an intellectual activity; why can’t it be done in academia? These days, my answer is: if it’s really primarily of interest to a very cosmopolitan philanthropist trying to help the whole future, and there’s no one client and it’s not frontier-advancing, then I think that does make it pretty plausible to me that there’s no one doing it. We would love to change that, at least somewhat, by funding what we think is the most important work.

Robert Wiblin: Something that doesn’t quite fit with that is that you do see a lot of practical psychology and nutrition papers that are trying to answer questions that the public has. Usually done very poorly, and you can’t really trust the answers. But it’s things like, you know, “Does chocolate prevent cancer?” Or some nonsense… a small-sample paper like that. That seems like it’s not pushing forward methodology, it’s just doing an application. How does that fit into this model?

Holden Karnofsky: Well, I mean, first up, it’s a generalization. So, I’m not gonna say it’s everything. But, I will also say, that stuff is very low prestige.

And I think it tends… so first off, I mean, A: that work, it’s not the hot thing to work on, and for that reason, I think, correlated with that, you see a lot of work that isn’t… it’s not very well funded, it’s not very well executed, it’s not very well done, it doesn’t tell you very much. The vast majority of nutrition studies out there are just… you know, you can look at even a sample report on carbs and obesity that Luke Muehlhauser did, it just… these studies are just… if someone had gone after them a little harder, with the energy and the funding with which we go after some of the fundamental stuff, they could have been a lot more informative.

And then the other thing, which I think you will see even less of, is good critical evidence reviews. So, you’ll see a study… so, you’re right, you’ll see a study that’s, you know, “Does chocolate cause more disease?” Or whatever, and sometimes that study will use established methods, and it’s just another data point. But you see much less of the part about taking what’s out there and synthesizing it all, and saying, “There’s a thousand studies; here are the ones that are worth looking at. Here are their strengths, here are their weaknesses.”

There are literature reviews, but I don’t think they’re a very prestigious thing to do, and I don’t think they’re done super well. And so, I think, for example, with some of the stuff GiveWell does, it’s like they have to reinvent a lot of this stuff, and they have to do a lot of the critical evidence reviews ’cause they’re not already out there.


The most interesting parts of this to me were:

  • Since reading Inadequate Equilibria, I’ve mostly thought of science through the lens of coordination failures; however, this new framing is markedly more positive, in that instead of talking about failures it talks about the successes (Old: “Academia is the thing that fails to do X” vs New: “Academia is the thing that is good at Y, but only Y”). As well as helping me model academia more fruitfully, I honestly suspect that this framing will be more palatable to people I present it to.

    • To state it in my own words: this model of science says the institution is good—not at all kinds of intellectual work, but specifically the subset that is ‘discovering new ideas’. This is to be contrasted with synthesis of old ideas into policy recommendations, or replication of published work (for any practical purpose); a toy numerical sketch of what such synthesis can buy you follows after this list.

    • For example, within science it is useful to have more data about which assumptions are actually true in a given model, yet I imagine that in this frame no individual researcher is incentivised to do anything but publish the next new idea, and so nobody does the replications either. (I know, predicting a replication crisis is very novel of me.)

  • This equilibrium model suggests to me that we’re living in a world where the individual who can pick up the most value is not the person coming up with new ideas, but the person who can best turn current knowledge into policy recommendations.

    • That is, the 80th percentile person at discovering new ideas will not create as much value as the 50th percentile person at synthesising and understanding a broad swathe of present ideas.

      • My favourite example of such a work is Scott’s Marijuana: Much More Than You Wanted to Know, which finds that the term that should capture most of the variance in your model (of the effects of legalisation) is how much marijuana affects driving ability.

    • Also in this model of science, we should distinguish ‘value’ from ‘competitiveness within academia’, which is in fact the very thing you would be trading away in order to do this work.
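
To make the synthesis point concrete, here is a toy sketch in Python. Everything in it is invented for illustration: the effect sizes, standard errors, and the “survived re-analysis” flags are made up, and the pooling is ordinary fixed-effects (inverse-variance) meta-analysis, not David Roodman’s actual procedure. The point is only structural: a critical review that drops studies that break under replication can flip the pooled conclusion relative to naively averaging the abstracts.

```python
import numpy as np

# Hypothetical studies of the effect of cutting incarceration on crime
# (positive = crime rises). All numbers are invented for illustration.
effects  = np.array([0.30, 0.25, 0.28, -0.02, 0.01, 0.00])
ses      = np.array([0.10, 0.12, 0.15,  0.03, 0.04, 0.03])   # standard errors
survived = np.array([False, False, False, True, True, True]) # passed re-analysis?

def pool(effects, ses):
    """Fixed-effects pooled estimate: weight each study by 1/SE^2."""
    w = 1.0 / ses**2
    return (w * effects).sum() / w.sum(), float(np.sqrt(1.0 / w.sum()))

naive  = pool(effects, ses)                       # trust every abstract
strict = pool(effects[survived], ses[survived])   # keep only studies that replicate

print("naive pooled effect:   %+.3f (SE %.3f)" % naive)
print("after critical review: %+.3f (SE %.3f)" % strict)
```

On these made-up numbers the naive pool says cutting incarceration increases crime, while the review-filtered pool says roughly no effect, which is the shape of the flip Karnofsky describes; none of the value came from a new method, only from re-weighing evidence that already existed.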


Some questions for the comments:

  • What is the main thing that this model doesn’t account for / overcounts? That is, what is the big thing this model forgets that science can’t do; alternatively, what is the big thing that this model says science can do, that it can’t?

  • Is the framing about the main place an intellectual can have outsized impact correct? That is, is the marginal researcher who does synthesis of existing knowledge in fact the most valuable, or is it some other kind of researcher?