Epistemic status: Rambly. Perhaps incoherent. That’s why this is a shortform post. I’m not really sure how to explain this well. I also sense that this is a topic that is studied by academics and might be a thing already.
I was just listening to Ben Taylor’s recent podcast on the top 75 NBA players of all time, and a thought I’ve always wanted to develop started to crystallize for me. For people who don’t know him (everyone reading this?), his epistemics are quite good. If you want to see good epistemics applied to basketball, read his series of posts on The 40 Best Careers in NBA History.
Anyway, at the beginning of the podcast, Taylor started to talk about something that was bugging him. Previously, on the 50th anniversary of the league in 1996, a bunch of people voted on a list of the top 50 players in NBA history. Now it is the 75th anniversary of the league, so a different set of people voted on the top 75 players in NBA history. The new list basically took the old list of 50 and added 25 new players. But Taylor was saying it probably shouldn’t be like this. One reason is that our understanding of the game of basketball has evolved since 1996, so the 1996 picks for the top 50 probably had some flaws. Also, it’s not like the voting body in 1996 was particularly smart. As Taylor nicely puts it, they weren’t a bunch of “basketball PhDs (if that were a thing)”; they were random journalists, players, and coaches, people who aren’t necessarily well qualified to be voting on this. For example, they placed a ton of value on how many points you scored, but not nearly enough on how efficiently you scored them.
Later in the podcast they were analyzing various players, and the guest, Cody, pointed out that one player was voted to a lot of all star games. Taylor said that while this is true, he doesn’t really trust the people who voted on all star games back in the 1960s or whenever it was (not that people are good at voting on all star games now). This got me thinking about something. Does it make sense to look at awards like all star selections, MVP voting, and All-NBA team voting (basically the top 15 players in the league)? Well, by doing so, you are incorporating the opinions of various other experts. But I see two problems here.
How smart are those experts? Sometimes expert opinion is actually quite flawed, and Taylor makes a good point that this is the case here.
In looking at the opinion of those experts, I think that you are committing one of those crimes that can send you to rationalist prison. I think that you are double counting the evidence! Here’s what I mean. I think that for these expert opinions, the experts rely a lot on what the other experts think. For example, in the podcast they were talking about Bob Cousy vs Bill Sharman. Cousy is considered a legend, whereas Sharman is a guy who was very good, but never became a household name. But Taylor was saying how he thinks Sharman might have actually been better than Cousy. But he just couldn’t bring himself to actually place Sharman over Cousy in his list. I think part of that is because it is hard to deviate from majority opinion that much. So I think that is an example where you base your opinion on what others think. Not 100%, but some percentage.
But isn’t that double counting? As a simplification, imagine that Alice arrives at her opinion without the influence of others, and then Bob’s opinion is 50% based on what Alice thinks and 50% based on what his gears-level models output. That seems to me like it should count as 1.5 data points, not 2. I think this becomes more apparent as you add more people. Imagine that Carol, Dave and Erin all do the same thing as Bob. I.e., each of them bases 50% of their opinion on what Alice thinks. Should that count as 5 data points or 3? What if all of them were basing it 99% on what Alice thinks? Should that count as 5 data points or 1.04? You could argue perhaps that 1.04 is too low, but arguing that it is 5 really seems too high. To make the point even clearer, what if there were 99 people who were 99% basing their opinion off of Alice? Would you say, “well, 100 people all believe X, so it’s probably true”? No! There’s only one person that believes X and 99 people who trust her.
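The arithmetic above can be made concrete with a toy model. To be clear, this is just my own sketch of the idea (the weighting scheme is an assumption, not anything standard): treat each person's opinion as partly independent and partly copied from Alice, and count only the independent fractions as separate pieces of evidence.

```python
# Toy model: how much "effective" evidence does a crowd provide when
# most of its members partly copy one person (Alice)?
# Assumption (mine, not from the post): each voter's opinion splits into
# an independent fraction and a copied fraction, and only the independent
# fractions add up as separate data points.

def effective_evidence(independent_fractions):
    """Sum each voter's independent fraction to get an effective count."""
    return sum(independent_fractions)

# Alice is fully independent; Bob, Carol, Dave, Erin are each 50% independent.
print(effective_evidence([1.0, 0.5, 0.5, 0.5, 0.5]))   # -> 3.0, not 5

# Same four people, but now 99% copying Alice (1% independent each).
print(effective_evidence([1.0] + [0.01] * 4))          # -> ~1.04, not 5

# 99 people who are 99% copying Alice.
print(effective_evidence([1.0] + [0.01] * 99))         # -> ~1.99, nowhere near 100
```

Under this model, "100 people believe X" collapses to roughly two data points' worth of evidence when 99 of them are mostly deferring to one person.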
This feels to me like it is actually a pretty important point. When you look at consensus opinion, or what the crowd thinks, once you filter out the double counting, the evidence it provides is a good deal weaker.
On the other hand, there are other things to think about. For example, if the consensus believes X and you can present good evidence of ~X, and in fact Y, then there is prestige to be gained. And if no one has come around and said “Hey! I have evidence of ~X, and in fact Y!”, well, absence of evidence is evidence of absence. In worlds where Y is true, given the incentive of prestige, we would expect someone to come around and say it. This depends on the community though. Maybe it’s too hard to present evidence. For example, in basketball it’s hard to measure the impact of defense. Or maybe the community just isn’t smart enough or set up properly to provide the prestige. E.g., if I had a brilliant idea about basketball, I’m not really sure where I could go to present it and receive prestige.
Edit:
Would you say, “well, 100 people all believe X, so it’s probably true”? No! There’s only one person that believes X and 99 people who trust her.
Well, I guess the fact that so many people trust her means that we should place more weight on her opinion. But saying “I believe X because someone who I have a lot of trust in believes X” is different from saying “I believe X because all 100 people who thought about this also believe X”.