Epistemic Tenure

In this post, I will try to justify the following claim (which I am not sure how much I believe myself):

Let Bob be an individual that I have a lot of intellectual respect for. For example, maybe Bob has a history of believing true things long before anyone else, or Bob has discovered or invented some ideas that I have found very useful. Now, let’s say that Bob expresses a new belief that feels to me to be obviously wrong. Bob has tried to explain his reasons for the belief, and they also seem to be obviously wrong. I think I can see what mistake Bob is making, and why he is making it. I claim that I should continue to take Bob very seriously, try to engage with Bob’s new belief, and give Bob a decent portion of my attention. I further claim that many people should do this, and do it publicly.

There is an obvious reason why it is good to take Bob’s belief seriously. Bob has proven to me that he is smart. The fact that Bob believes a thing is strong evidence that that thing is true. Further, before Bob said this new thing, I trusted his epistemics nearly as much as I trust my own. I don’t have a strong reason to believe that I am not the one who is obviously wrong. The situation is symmetric. Outside view says that Bob might be right.

This is not the reason I want to argue for. I think it is partially right, but there is another reason that I think people are more likely to miss, and that I think pushes the claim a lot further.

Before Bob had his new bad idea, Bob was in a position of having intellectual respect. An effect of this was that he could say things, and people would listen. Bob probably values this fact. He might value it because he terminally values the status. But he might also value it because the fact that people will listen to his ideas is instrumentally useful. For example, if people are willing to listen to him and he has opinions on what sorts of things people should be working on, he could use his epistemic status to steer the field in directions that he thinks will be useful.

When Bob has a new bad idea, he might not want to share it if he thinks it would cause him to lose his epistemic status. He may prefer to save up his epistemic status to spend later. This by itself would not be very bad. What I am worried about is that Bob ends up not having the new bad idea in the first place. It is hard to hold one set of beliefs and simultaneously speak from another. The external pressure that I place on Bob to continue to say new interesting things that I agree with may back-propagate all the way into Bob’s ability to generate new beliefs.

This is my true concern. I want Bob to be able to think free of the external pressures coming from the fact that others are judging his beliefs. I still want to be able to partially judge his beliefs, and move forward even when Bob is wrong. I think there is a real tradeoff here. The group epistemics are made better by directing attention away from bad beliefs, but the individual epistemics are made better by optimizing for truth, rather than what everyone else thinks. Because of this, I can’t give out (my own personal) epistemic tenure too freely. Attention is a conserved resource, and attention that I give to Bob is being taken away from attention that could be directed toward GOOD ideas. Because of this tradeoff, I am really not sure how much I believe my original claim, but I think it is partially true.

I am really trying to emphasize the situation where even my outside view says that Bob is wrong. I think this points out that it is not about how Bob’s idea might be good. It is about how Bob’s idea might HAVE BEEN good, and the fact that he would not lose too much epistemic status is what enables him to make the more high-variance cognitive moves that might lead to good ideas. This is why it is important to make this public. It is about whether Bob, and other people like Bob, can trust that they will not be epistemically ostracized.

Note that a community could have other norms that are not equivalent to epistemic tenure, but partially replace the need for it, and make it not worth it because of the tradeoffs. One such mechanism (with its own tradeoffs) is not assigning that much epistemic status at all, and trying to ignore who is making the arguments. If I were convinced that epistemic tenure was a bad idea for LW or AI safety, it would probably be because I believed that existing mechanisms are already doing enough of it.

Also, maybe it is a good idea to do this implicitly, but a bad idea to do it explicitly. I don’t really know what I believe about any of this. I am mostly just trying to point out that a tradeoff exists, that the costs of having to take approval of the group epistemics into account when forming your own beliefs might be both invisible and large, and that there could be some structural ways to fight against those costs.