Epistemic Tenure

In this post, I will try to justify the following claim (which I am not sure how much I believe myself):

Let Bob be an individual that I have a lot of intellectual respect for. For example, maybe Bob has a history of believing true things long before anyone else, or Bob has discovered or invented some ideas that I have found very useful. Now, let's say that Bob expresses a new belief that feels to me to be obviously wrong. Bob has tried to explain his reasons for the belief, and they seem to also be obviously wrong. I think I can see what mistake Bob is making, and why he is making it. I claim that I should continue to take Bob very seriously, try to engage with Bob's new belief, and give Bob a decent portion of my attention. I further claim that many people should do this, and do it publicly.

There is an obvious reason why it is good to take Bob's belief seriously. Bob has proven to me that he is smart. The fact that Bob believes a thing is strong evidence that that thing is true. Further, before Bob said this new thing, I would have trusted his epistemics nearly as much as I trust my own. I don't have a strong reason to believe that I am not the one who is obviously wrong. The situation is symmetric. The outside view says that Bob might be right.

This is not the reason I want to argue for. I think it is partially right, but there is another reason that I think people are more likely to miss, and that I think pushes the claim a lot further.

Before Bob had his new bad idea, Bob was in a position of having intellectual respect. An effect of this was that he could say things, and people would listen. Bob probably values this fact. He might value it because he terminally values the status. But he also might value it because the fact that people will listen to his ideas is instrumentally useful. For example, if people are willing to listen to him and he has opinions on what sorts of things people should be working on, he could use his epistemic status to steer the field towards directions that he thinks will be useful.

When Bob has a new bad idea, he might not want to share it if he thinks it would cause him to lose his epistemic status. He may prefer to save his epistemic status up to spend later. This itself would not be very bad. What I am worried about is if Bob ends up not having the new bad idea in the first place. It is hard to hold one set of beliefs and simultaneously speak from another. The external pressure I place on Bob to continue to say new, interesting things that I agree with may backpropagate all the way into Bob's ability to generate new beliefs.

This is my true concern. I want Bob to be able to think free of the external pressures coming from the fact that others are judging his beliefs. I still want to be able to partially judge his beliefs, and move forward even when Bob is wrong. I think there is a real tradeoff here. The group epistemics are made better by directing attention away from bad beliefs, but the individual epistemics are made better by optimizing for truth, rather than for what everyone else thinks. Because of this, I can't give out (my own personal) epistemic tenure too freely. Attention is a conserved resource, and attention that I give to Bob is being taken away from attention that could be directed toward GOOD ideas. Because of this tradeoff, I am really not sure how much I believe my original claim, but I think it is partially true.

I am really trying to emphasize the situation where even my outside view says that Bob is wrong. I think this points out that it is not about how Bob's idea might be good. It is about how Bob's idea might HAVE BEEN good, and the fact that he would not lose too much epistemic status is what enabled him to make the more high-variance cognitive moves that might lead to good ideas. This is why it is important to make this public. It is about whether Bob, and other people like Bob, can trust that they will not be epistemically ostracized.

Note that a community could have other norms that are not equivalent to epistemic tenure, but partially replace the need for it, and make it not worth it because of the tradeoffs. One such mechanism (with its own tradeoffs) is not assigning that much epistemic status at all, and trying to ignore who is making the arguments. If I were convinced that epistemic tenure was a bad idea for LW or AI safety, it would probably be because I believed that existing mechanisms are already doing enough of it.

Also, maybe it is a good idea to do this implicitly, but a bad idea to do it explicitly. I don't really know what I believe about any of this. I am mostly just trying to point out that a tradeoff exists, that the costs of having to take the approval of the group epistemics into account when forming your own beliefs might be both invisible and large, and that there could be some structural ways to fight against those costs.