Thanks! Some very quick thoughts:

Intellectuals aren’t the ones playing the game, they’re the ones figuring out the rules of the game.
This doesn’t seem true to me. There’s relatively little systematic literature from intellectuals trying to understand what structural factors make for quality intellectual standards; the majority of their output is arguing over specific opinions orthogonal to that question. It’s true that intellectuals “are the ones” who figure out the rules of the game, but only a small minority of them do so, and for those people it’s often a side endeavor.
In a way, this problem is just scaling up the old reputation/prestige system.
Definitely. I think the process of “evaluation standardization and openness” is a repeated one across industries and sectors. There’s a lot of value to be had in understanding the wisdom of existing informal evaluation systems and scaling them into formal ones.
Maybe some kind of social app inspired by liquid democracy/quadratic voting might work?
I imagine the space of options here is quite vast. This option seems like a neat choice. Perhaps several distinct efforts could be tried.
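To make the quadratic-voting idea concrete, here is a toy tally in Python. This is only a sketch under the standard quadratic-voting setup (casting v votes on an option costs v² credits from a fixed budget); the function names, ballot format, and budget size are all made up for illustration, not any particular app’s design:

```python
# Toy quadratic-voting tally (hypothetical app backend).
# Each voter gets a fixed credit budget; casting v votes on an option
# costs v**2 credits, which damps extreme single-issue preferences.

def qv_tally(ballots, budget=100):
    """ballots: list of dicts mapping option -> votes (may be negative)."""
    totals = {}
    for ballot in ballots:
        cost = sum(v * v for v in ballot.values())
        if cost > budget:
            raise ValueError(f"ballot over budget: {cost} > {budget}")
        for option, v in ballot.items():
            totals[option] = totals.get(option, 0) + v
    return totals

# Example: two voters; the second cares strongly about B.
result = qv_tally([
    {"A": 3, "B": 1},   # cost 9 + 1 = 10 credits
    {"A": -1, "B": 9},  # cost 1 + 81 = 82 credits
])
# result == {"A": 2, "B": 10}
```

The quadratic cost is the whole trick: expressing a strong preference is allowed but gets expensive fast, so the mechanism surfaces intensity of preference without letting one voter dominate.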
What metrics do you have in mind?
I have some rough ideas, but I want to brainstorm on this a bit more before writing them up.
I maybe wasn’t clear about what I meant by ‘the game.’ I didn’t mean how to be a good public intellectual but rather the broader ‘game’ of coming up with new ideas and figuring things out.
One important metric I use to judge public intellectuals is whether they share my views and start from similar assumptions. It’s obviously important not to filter too strongly on this, or you’ll never hear anything that challenges your beliefs, but it still makes sense to discount the views of people who hold beliefs you think are false. But you obviously can’t build an objective metric out of how much someone agrees with you.

The issue is that one of the most important metrics I use to quickly assess the merits of an intellectual is inherently subjective. You can’t base your system on adjudicating the truth of disputed claims.
One consideration to keep in mind, though, is that there might also be a social function in the informality and vagueness of many evaluation systems. From Social Capital in Silicon Valley:
The illegibility and opacity of intra-group status was doing something really important – it created space where everyone could belong. The light of day poisons the magic. It’s a delightful paradox: a group that exists to confer social status will fall apart the minute that relative status within the group is made explicit. There’s real social value in the ambiguity: the more there is, the more people can plausibly join, before it fractures into subgroups.
There is probably a lot to be improved with current evaluation systems, but one always has to be careful with those fences.
Good points, thanks.
I think ranking systems can be very powerful (as would make sense for something I’m claiming to be important), and can be quite bad if done poorly (arguably, current uses of citations are quite poor). Being careful matters a lot.