The question is, who counts as terrible? What sorts of lapses in rigorous thinking are just normal human fallibility, and which ones make a person seriously untrustworthy?
If at all possible you need to look at the person’s actual track record. Everyone has views you will find incredibly stupid or immoral. Even the very wise make mistakes that look obvious to us. In addition, it’s possible that the person engaging in ‘obvious folly’ actually has a better understanding of the situation than we do. You need to look at a representative sample and weigh their successes and failures in a systematic way. If you cannot access their history, you still need to get an actual sample. If you were judging programmers, something like a Triplebyte interview is a reasonable way to get information. Trying to weigh the stupid things they have said about programming is a very bad method. Without a real sample you are making a character judgment under huge uncertainty.
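One way to make “weigh their successes and failures in a systematic way” concrete is to treat the sampled track record as evidence about an underlying hit rate. A minimal sketch, assuming a simple Beta-Bernoulli model (the function name, the uniform Beta(1, 1) prior, and the example counts are all my choices, not anything from the post):

```python
import math

def track_record_posterior(successes, failures, prior_a=1.0, prior_b=1.0):
    """Beta posterior over the probability this person gets things right.

    Starts from a Beta(prior_a, prior_b) prior (uniform by default) and
    updates on the observed sample of successes and failures.
    """
    a = prior_a + successes
    b = prior_b + failures
    mean = a / (a + b)
    # Rough 95% interval via the Beta distribution's standard deviation.
    sd = math.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))
    return mean, (max(0.0, mean - 2 * sd), min(1.0, mean + 2 * sd))

# Hypothetical sample: 8 good calls, 2 bad ones.
mean, interval = track_record_posterior(successes=8, failures=2)
print(mean, interval)
```

With only ten sampled judgments the interval stays wide, which is the point: a small sample supports only a weak character judgment.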
Of course we are Bayesians. If forced to come up with an estimate despite uncertainty, we can do it. But it’s important to do the updating correctly. Say the stupidest belief of a person’s that you know about is X. The relevant odds ratio is not:
P(believes X | trustworthy) / P(believes X | untrustworthy)
Instead you have to look at:
P(stupidest belief I learn about is at least as stupid as X | trustworthy) / P(stupidest belief I learn about is at least as stupid as X | untrustworthy)
You can try to estimate similar odds ratios for collections of stupid beliefs. This method isn’t as good as conditioning on both unusually wise and unusually stupid beliefs. But if you are going to judge based on stupid beliefs, you have to do it correctly. Keep in mind that the more ‘open’ a person is, the more likely you are to learn their stupid beliefs. So you need to factor in an estimate of their openness towards you.
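The openness adjustment can be sketched the same way. Assuming (my numbers, purely illustrative) two people with an identical underlying rate of stupid beliefs, openness just controls how many of their beliefs you ever get to see:

```python
# Illustrative only: same underlying rate of stupid beliefs for both
# people; openness determines how many beliefs you observe.
p_stupid_per_belief = 0.02  # chance any single belief is "that stupid"

def p_observe_one(p_per_belief, beliefs_seen):
    """P(you witness at least one such belief among those you see)."""
    return 1 - (1 - p_per_belief) ** beliefs_seen

guarded = p_observe_one(p_stupid_per_belief, 5)      # shares little
open_book = p_observe_one(p_stupid_per_belief, 200)  # shares everything
print(round(guarded, 3), round(open_book, 3))
```

The guarded person almost never shows you a stupid belief while the open one almost certainly does, despite being exactly as sensible. Failing to correct for openness systematically penalizes candid people.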