I really like this post! But I have a nagging intuition along the lines of: ‘sure, the first example in this post seems legit, but I don’t think it should actually update anything in my worldview for the real-life situations where I actively think about Bayes’ Rule and epistemics’. And I definitely don’t agree with your example about top 1% traders. My attempt to put this into words:
1. Strong evidence is rarely independent. Hearing you say ‘my name is Mark’ to person A might be 20,000:1 evidence, but hearing you then say it to person B is 10:1 at most. Most hypotheses that explain the first event well also explain the second event well. So while the first sample carries a lot of information, each later sample carries far less, which makes the trick much less exciting in practice.
It’s much easier to get to middling probabilities than to high ones. This makes sense: for most questions I’m only going to explicitly consider the odds of fewer than 100 hypotheses, so a hypothesis with, say, <1% probability isn’t likely to be worth thinking about. But to get to 99%, a hypothesis needs to defeat all of the others too.
E.g., in the ‘top 1% of traders’ example, it might be easy to become confident that I’m above the 90th percentile, but much harder to move beyond that.
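A toy Bayes calculation makes both of these points concrete (the numbers are my own invented illustration, not from the post):

```python
# Toy odds arithmetic illustrating correlated evidence and the
# middling-vs-high probability gap. All numbers are illustrative.

def update(prior_odds, likelihood_ratio):
    """Multiply prior odds by a likelihood ratio to get posterior odds."""
    return prior_odds * likelihood_ratio

def odds_to_prob(odds):
    """Convert odds (in favour) to a probability."""
    return odds / (1 + odds)

# Correlated evidence: the first 'my name is Mark' is worth ~20,000:1,
# but a second utterance mostly rules out the same hypotheses, so it
# only adds ~10:1.
odds = 1 / 20_000              # prior: Mark is one of ~20,000 plausible names
odds = update(odds, 20_000)    # first statement: huge update
odds = update(odds, 10)        # second statement: much weaker
print(f"P(name is Mark) ≈ {odds_to_prob(odds):.3f}")  # ≈ 0.909, not 99%+

# Middling vs high: beating the single best rival is easy, but reaching
# 99% requires the *total* probability of all rivals to fall below 1%.
p_top_hypothesis = 0.4
n_rivals = 60
p_each_rival = (1 - p_top_hypothesis) / n_rivals
print(f"each of {n_rivals} rivals holds ~{p_each_rival:.3f}")  # ~0.010 each
```

Even two seemingly strong samples only land around 91% here, because the second one is almost entirely screened off by the first.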
2. This gets much messier when I’m facing an adversarial process. If you say ‘my name is Mark Xu, want to bet about what’s on my driver’s license?’, the same statement is much weaker evidence, because I now face adverse selection. Many real-world problems I care about involve other people applying optimisation pressure to shape the evidence I see, and some of that pressure is adversarial. The world does not tend to contain people trying to deceive me about world capitals.
An adversarial process could be someone else trying to trick me, but it could also be a cognitive bias of mine, e.g. ‘I want to believe that I am an awesome, well-calibrated person’. It could also be selection bias: what is the process that generated the evidence I see?
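A toy model of the adverse selection in the driver’s-license bet (again, all numbers are my own invented illustration). Conditioning on the fact that a bet was offered at all can collapse an apparently overwhelming prior:

```python
# Toy adverse-selection model with invented numbers: the claim alone
# looks like ~9,999:1 evidence of honesty, but honest people almost
# never push a bet about it, while a con artist almost always would.
p_honest = 0.9999              # after hearing 'my name is Mark Xu'
p_bet_given_honest = 0.0001    # honest speakers rarely propose the bet
p_bet_given_trick = 0.9        # a trickster nearly always proposes it

joint_honest = p_honest * p_bet_given_honest
joint_trick = (1 - p_honest) * p_bet_given_trick
p_honest_given_bet = joint_honest / (joint_honest + joint_trick)
print(f"P(honest | bet offered) ≈ {p_honest_given_bet:.2f}")  # ≈ 0.53
```

The bet offer itself carries a ~9,000:1 likelihood ratio towards trickery, nearly cancelling the 9,999:1 prior, so the 99.99% belief drops to roughly a coin flip.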
3. Some questions have obvious answers, others don’t. The questions most worth thinking about are rarely the ones that are obvious. The ones where I can access strong evidence easily are much less likely to be worth thinking about. If someone disagrees with me, that’s at least weak evidence against the existence of strong evidence.