An analogy could be Elon Musk. He’s done great things that I personally am absolutely incapable of. And he does deserve praise for those things. And indeed, Eliezer was a big influence on me. But he gives extreme predictions that probably won’t age well.
Him starting this site and writing a million words about rationality is wonderful and outstanding. But do you think that predicts forecasting performance nearly as well as actual forecasting performance does? I claim it is nowhere near as good a predictor as just making some actual forecasts and seeing what happens, and I don’t see the opposing position holding up at all. You can argue that “we care about things other than forecasting ability,” but in this thread I am specifically referring to his implied forecasting accuracy, not his other accomplishments. The way you’re using “Bayes points” here doesn’t seem workable or coherent, any more than “Musk points” would tell me his predictions are accurate.
If people (like Musk) are continually successful, you know they’re doing something right. One-off success can be survivorship bias, but the odds of sustained success by mere happenstance get very low, very quickly.
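The arithmetic behind that claim is simple: if each individual success could happen by luck alone with probability p, then a streak of n independent successes has probability p^n, which shrinks fast. A minimal sketch (the 50% per-attempt odds are an illustrative assumption, not a claim about any real person):

```python
def prob_lucky_streak(p: float, n: int) -> float:
    """Probability of n consecutive successes by pure luck,
    assuming each attempt independently succeeds with probability p."""
    return p ** n

# Even with generous 50% odds per attempt, a streak gets improbable quickly.
for n in (1, 3, 5, 10):
    print(n, prob_lucky_streak(0.5, n))
# n=10 already gives under 0.001
```

This is why one success is weak evidence but a long run of successes is strong evidence of *some* underlying competence; the open question in the thread is which competence.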
When I call that “getting Bayes points”, what I mean is that if someone demonstrates good long-term decision-making, or gets good long-term outcomes, or arrives at an epistemic state more quickly, you know they’re doing some kind of implicit forecasting correctly, because long-term decisions in the present are evaluated by the reality of the future.
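One concrete way to read “Bayes points” is as a proper scoring rule such as the log score: a forecaster who assigned probability p to what actually happened earns log(p). A minimal sketch under that reading (the example forecasts are made up for illustration):

```python
import math

def log_score(p: float, outcome: bool) -> float:
    """Log score of a probabilistic forecast: closer to 0 is better."""
    return math.log(p if outcome else 1.0 - p)

# A confident correct forecast beats a hedged one...
print(log_score(0.9, True))   # ~ -0.105
print(log_score(0.6, True))   # ~ -0.511
# ...but a confident wrong forecast is punished heavily.
print(log_score(0.9, False))  # ~ -2.303
```

Under this rule, honestly reporting your true probability maximizes your expected score, which is why it is used to evaluate calibration. The disagreement in the thread is whether implicit, unscorable decisions can be credited this way at all.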
This whole discussion vaguely reminds me of conflicts between e.g. boxing and mixed martial arts (MMA) advocates: the former has more rules, while the latter is more flexible, so how can two competitors from their respective disciplines determine which of them is the better martial artist? They could both compete in boxing, or both compete in MMA. Or they could decide not to bother, and remain in their own arenas.
I guess it seems to you like Yudkowsky has encroached on the Metaculus arena but isn’t playing by the Metaculus rules?
No, success and fame are not very informative about forecasting accuracy. Yes, they are strongly indicative of other competencies, but you shouldn’t mix those into a measure of forecasting. And nebulous, unscorable statements don’t work as “success” either; they’re too cherry-picked and unworkable. Musk is famously uncalibrated, with famously bad timeline predictions in his own domain! I don’t think you should gloss over that in this context by saying “Well, he’s successful...”
If we are talking about measuring forecasting performance, then it’s more like comparing tournament karate with trench warfare.
If people (like Musk) are continually successful, you know they’re doing something right. One-off success can be survivorship bias, but the odds of sustained success by mere happenstance get very low, very quickly.
Unless success breeds more success, irrespective of other factors.
I’m going to steal the tournament karate and trench warfare analogy. Thanks.