We should learn how to identify trustworthy experts. Is there some general way, or do you have to rely on specific rules for each category of knowledge?
Two examples of such rules: never trust someone's advice about which specific stocks to buy unless the advisor has material non-public information, and be extremely skeptical of statistical evidence presented in Women's Studies journals. Although both rules are probably true, you obviously couldn't trust financial advisers or Women's Studies professors to give them to you.
Prediction markets can forecast the accuracy or fame of purported experts. But ideally you'd just accept the market estimate on your question itself, and so not need to know who the experts are.
This is of course exactly the point.
People will be people. The solution is to depersonalize, not to pick some fine fellow and put our faith in him.
Trying to figure out which experts to trust feels to me like asking which tyrants can best be trusted.
Experts are valuable (unlike tyrants), but trust is better placed in a market than in individual people.
Obviously it helps if the experts are required to make predictions that are scoreable. Over time, we could examine both the track records of individual experts and entire disciplines in correctly predicting outcomes. Ideally, we would want to test these predictions against those made by non-experts, to see how much value the expertise is actually adding.
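The track-record idea above can be made concrete with a standard proper scoring rule such as the Brier score. A minimal sketch, where the forecasts and outcomes are invented purely for illustration:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes.
    Lower is better; always answering 0.5 scores exactly 0.25."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

outcomes = [1, 0, 1, 1, 0]            # what actually happened (hypothetical)
expert   = [0.9, 0.2, 0.7, 0.8, 0.1]  # a confident, well-calibrated forecaster
layman   = [0.5] * 5                  # a non-expert who always says 50/50

print(brier_score(expert, outcomes))  # ≈ 0.038
print(brier_score(layman, outcomes))  # 0.25
```

Comparing an expert's score against the always-0.5 baseline is one simple way to measure how much value the expertise is actually adding, as suggested above.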
Another proposal, which I raised on a previous comment thread, is to collect third-party credibility assessments in centralized databases. We could collect the rates at which expert witnesses are permitted to testify at trial and the rate at which their conclusions are accepted or rejected by courts, for instance. We could similarly track the frequency with which authors have their articles accepted or rejected by journals engaged in blind peer-review (although if the review is less than truly blind, the data might be a better indication of status than of expertise, to the degree the two are not correlated). Finally, citation counts could serve as a weak proxy for trustworthiness, to the degree the citations are from recognized experts and indicate approval.
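The centralized-database proposal could combine these third-party signals into a single rough score. A hedged sketch, where the weighting scheme, the citation cap, and the sample record are all illustrative assumptions rather than anything proposed above:

```python
def credibility_score(record, weights=(0.5, 0.3, 0.2)):
    """Combine court acceptance rate, peer-review acceptance rate, and
    citations into one rough [0, 1] score. Weights are arbitrary guesses."""
    court = record["testimony_accepted"] / record["testimony_offered"]
    review = record["articles_accepted"] / record["articles_submitted"]
    # Citations are only a weak proxy, so cap them and normalize to [0, 1].
    citations = min(record["citations"], 1000) / 1000
    w_court, w_review, w_cite = weights
    return w_court * court + w_review * review + w_cite * citations

expert = {
    "testimony_offered": 20, "testimony_accepted": 18,
    "articles_submitted": 10, "articles_accepted": 6,
    "citations": 450,
}
print(round(credibility_score(expert), 3))  # ≈ 0.72
```

The weighting is where the incest worry from the next comment bites: if the review and citation inputs merely reflect in-group status, the combined score inherits that bias.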
The suggestions in the second paragraph all seem rather incestuous. Propagating trust is great, but it should flow from a trustworthy fountain. Those designated "experts" need some non-incestuous test as their foundation (à la your first paragraph).
Internal credibility is of little use when we want to compare the credentials of experts in widely differing fields. But it is useful if we want to know whether someone is trusted in their own field. Now suppose that we have enough information about a field to decide that good work in that field generally deserves some of our trust (even if the field's practices fall short of the ideal). By tracking internal credibility, we have picked out useful sources of information.
Note too that this method could be useful if we think a field is epistemically rotten. If someone is especially trusted by literary theorists, we might want to downgrade our trust in them, solely on that basis.
So the two inquiries complement each other: We want to be able to grade different institutions and fields on the basis of overall trustworthiness, and then pick out particularly good experts from within those fields we trust in general.
p.s. Peer review and citation counting are probably incestuous, but I don’t think the charge makes sense in the expert witness evaluation context.
Have you actually evaluated the statistical evidence in Women's Studies journals?