The issue is that probabilities for a single event that will either happen or not don’t really make sense in a literal frequentist way (any single macro-scale event happens with ~0% or ~100% probability).
I think when EY says he had a 10% chance of HPMoR being successful, the claim should be read in the context of calibration, not as a claim that he could actually attempt it 10 times and then see how often he succeeds:
https://www.lesswrong.com/tag/calibration
To see if he’s accurate, you’d need to take the other predictions in his 10% probability bucket, find out what fraction of them came true, and then see how far that fraction is from 10%. I’m not sure if EY does this, but you can see an example from Scott here: https://slatestarcodex.com/2020/04/08/2019-predictions-calibration-results/
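The bucket-checking procedure is simple enough to sketch in a few lines. The predictions below are invented for illustration (they are not EY’s or Scott’s actual predictions):

```python
# Calibration check: group predictions by stated probability and
# compare each bucket's stated probability to the observed frequency.

def calibration(predictions):
    """predictions: list of (stated_probability, came_true) pairs.
    Returns {stated_probability: (observed_frequency, count)}."""
    buckets = {}
    for p, outcome in predictions:
        buckets.setdefault(p, []).append(outcome)
    return {p: (sum(o) / len(o), len(o)) for p, o in buckets.items()}

# Ten made-up predictions, all assigned 10%: if about 1 in 10 came
# true, the 10% bucket is well calibrated.
sample = [(0.1, False)] * 9 + [(0.1, True)]
print(calibration(sample))  # {0.1: (0.1, 10)}
```

In practice you’d also want enough predictions per bucket that the observed frequency is meaningful; a bucket with three predictions can’t land very close to 10% no matter how calibrated you are.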