I think there’s probably a fundamental limit to how good the ranking could be. For one thing, the people coming up with the rating system would probably be considered “intellectuals”. So who rates the raters?
But it seems very possible to get better than we are now. Currently the ranking system is mostly gatekeeping and social signaling.
Agreed there’s a limit; it’s hard. But to be fair, so is evaluating students, government officials, engineers, doctors, lawyers, smartphones, movies, and books.
Around “who rates the raters,” the thought is:
First, the raters should themselves be subject to rating: a decentralized pool of raters, each of whom rates the others.
Second, there are methods raters could use to provide additional verification, but that’s for another post.
I like the overlapping-webs-of-trust idea: there’s no central authority, so each user only needs to trust someone in order to get ratings from the system. If you trust at least one other person, you can take their rankings of who else thinks well, then integrate those people’s rankings, and so on.
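To make the "integrate their rankings" step concrete, here's a minimal one-hop sketch of trust-weighted rating aggregation. All the names, trust values, and the averaging scheme are hypothetical illustrations, not a proposal for the real system:

```python
# Sketch: one hop of rating propagation through a web of trust.
# All users, trust weights, and ratings below are made up for illustration.

# direct_trust[a][b]: how much user a trusts user b's judgment (0..1)
direct_trust = {
    "alice": {"bob": 0.9, "carol": 0.4},
    "bob":   {"carol": 0.8},
    "carol": {},
}

# ratings[u][t]: user u's rating of thinker t (0..1)
ratings = {
    "bob":   {"thinker_x": 0.7},
    "carol": {"thinker_x": 0.3},
}

def integrated_rating(me, thinker):
    """Average the ratings of the people I trust, weighted by how
    much I trust each of them. Returns None if no trusted peer
    has rated this thinker."""
    total, weight = 0.0, 0.0
    for peer, trust in direct_trust.get(me, {}).items():
        if thinker in ratings.get(peer, {}):
            total += trust * ratings[peer][thinker]
            weight += trust
    return total / weight if weight else None
```

A fuller version would recurse (or iterate to a fixed point) so that trust flows more than one hop, which is where the "etc." in the comment above comes in.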
Of course, it all remains unfortunately very subjective. No ground truth comes in to help decide who was actually right, unlike in a betting market.
Ratings will change over time, and a formula could reward those who spot good intellectuals early (the analogy being that your ratings are like an investment portfolio).
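One hypothetical way to cash out the investment-portfolio analogy: treat the consensus rating at the moment you rate someone as your "purchase price," so your payoff is how much the consensus later rises. The function and numbers below are an illustrative sketch, not a worked-out scoring rule:

```python
# Sketch: rewarding raters who spot good intellectuals early.
# Analogy: your rating is a "buy" at the current consensus price;
# you profit by the amount the consensus later rises.

def spotter_payoff(consensus_at_entry, final_consensus, stake=1.0):
    """Payoff for having rated a thinker highly back when the
    consensus rating was consensus_at_entry and it later settled
    at final_consensus. Early spotters buy low, so they gain more;
    late joiners buy near the final price and gain little."""
    return stake * (final_consensus - consensus_at_entry)
```

Under this rule, someone who rated a thinker highly when the consensus was 0.2 and watched it rise to 0.9 earns more than someone who joined at 0.8, which is the incentive the portfolio analogy is after.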