This may be anchoring the concept too firmly in the community, but I think there is another benefit to giving attention to obviously-wrong ideas from epistemically-sound people: it shows how a given level of epistemic mastery is incomplete.
I feel like, given an epistemology vocabulary, it would be very easy to say that since Bob has said something wrong, and his reasons are wrong, we should lower our opinion of Bob’s epistemic mastery overall. I also feel like that would be both wrong and useless, because it is not as though rationality consists of some singular epistemesis score that can be raised or lowered. Instead there is an (incomplete!) battery of skills that go into rationality, and we’d be forsaking an opportunity to advance the art if we didn’t spend time looking at the how and why of the wrongness.
I think the benefit of epistemic tenure is in high-value “oops!” generation.
it is not as though rationality consisted of some singular epistemesis score that can be raised or lowered
I feel like this is fighting the hypothetical. As Garrabrant says:
Attention is a conserved resource, and attention that I give to Bob is being taken away from attention that could be directed toward GOOD ideas.
It doesn’t matter whether or not you think it is possible to track rationality through some singular epistemesis score. The question is: given limited attentional resources, the problem the OP outlined, and the fact that “rationality” is probably complicated, what do you do anyway?
How you divvy up those resources is the score. Or, to replace the symbol with the substance: if you’re in charge of divvying up those resources, then your particular allocation algorithm will decide what your underlings treat as status/currency, and that can backpropagate into their minds.
The thing I am trying to point at here is that attention to Bob’s bad ideas is also, necessarily, attention to the good ideas Bob uses in idea generation. Therefore I think the total cost in wasted attention is much lower than it appears, which speaks both to why we should be less concerned about evaluating those ideas and to why Bob should not sweat his status.
I would go further and say it is strange to me that an idea from a reliable source should be more likely to be dismissed for being obviously wrong than for being subtly wrong. Bob is smart and usually correct, so further attention to a mostly-correct idea of his is unlikely to improve it. By contrast, an obviously wrong idea is a big red flag that something in his process has gone wrong.
I may be missing something obvious, but I’m having a hard time imagining how to distinguish in practice between a policy against giving attention to bad ideas and a policy against giving attention to idea-generating ideas. This seems self-defeating.